Structured Regularization Summer School - 19-22/06/2017


Source: Institut Henri Poincaré, 21/06/2017

A. Hansen - 1/4 - 19/06/2017 1:34:46 - Anders Hansen (Cambridge)
Lectures 1 and 2: Compressed Sensing: Structure and Imaging
Abstract: The above heading is the title of a new book to be published by Cambridge University Press. In these lectures I will cover some of the main issues discussed in this monograph/textbook. In particular, we will discuss how the key to the success of compressed sensing applied in imaging lies in the structure. For example, images are not just sparse in an X-let expansion; they have a very specific sparsity structure in levels according to the X-let scales. Similarly, when considering Total Variation, the gradient coefficients are also highly structured. Moreover, in most realistic sampling scenarios, the sampling operator combined with any X-let transform yields a matrix with a very specific coherence structure. The key to using compressed sensing successfully is therefore to understand how to utilise these structures in an optimal way, in particular in the sampling procedure. In addition, as the coherence and sparsity structures have very particular asymptotic behaviour, the performance of compressed sensing varies greatly with dimension, and so does the optimal way of sampling. Fortunately, there is now a developed theory that can guide the user in detail on how to optimise the use of compressed sensing in inverse and imaging problems. I will cover several of the key aspects of the theory, accompanied by real-world examples from Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR), Surface Scattering, Electron Microscopy, Fluorescence Microscopy, etc.
Recommended readings: (lectures 1 and 2)
- Chapters 4, 6 and 12 in "A Mathematical Introduction to Compressive Sensing" (Foucart/Rauhut)
- Breaking the coherence barrier: A new theory for compressed sensing
- On asymptotic structure in compressed sensing
- Structure dependent sampling in compressed sensing: theoretical guarantees for tight frames
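The structured, coherence-aware sampling the abstract describes can be illustrated with a minimal sketch: a variable-density Fourier sampling mask that draws low frequencies (where coherence with wavelet-type sparsity is highest) densely and tails off polynomially. The density exponent `alpha` and the decay law are illustrative assumptions, not taken from the lectures.

```python
import numpy as np

def variable_density_mask(n, m, alpha=2.0, rng=None):
    """Pick roughly m of n Fourier frequencies, favouring low ones.

    Frequencies are indexed -n//2 .. n//2 - 1; the sampling probability
    decays like 1/(1+|k|)**alpha, mimicking multilevel sampling schemes.
    (The polynomial decay is an assumption chosen for illustration.)
    """
    rng = np.random.default_rng(rng)
    k = np.arange(n) - n // 2
    weights = 1.0 / (1.0 + np.abs(k)) ** alpha
    probs = np.minimum(1.0, m * weights / weights.sum())
    return rng.random(n) < probs

mask = variable_density_mask(256, 64, rng=0)
# Low frequencies should be sampled much more densely than high ones.
centre = mask[112:144].mean()
edges = np.concatenate([mask[:32], mask[-32:]]).mean()
```

Here the very lowest frequencies are sampled with probability one (the clipped density), which matches the intuition that fully sampling the coarsest scales is essential before spreading the remaining budget over higher frequencies.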
Lectures 3 and 4: On Foundational Computational Problems in l1 and Total Variation Regularisation
Abstract: The use of regularisation techniques such as l1 and Total Variation in Basis Pursuit and Lasso has been a great success in wide areas of mathematics and statistics over the last decades. However, in these lectures I will discuss the following paradox: it is impossible to design algorithms that solve these general problems accurately when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. As a simple number such as the square root of 2 never comes with an exact numerical representation, inaccurate input data is a daily encounter. The impossibility result implies that for any algorithm designed to solve these problems there will be cases where the algorithm fails in the following way: for fixed dimensions and any small accuracy parameter epsilon greater than 0, one can choose an arbitrarily large time T and find an input such that the algorithm will run for longer than T and still not have reached epsilon accuracy. Moreover, it is impossible to determine when the algorithm should halt to achieve an epsilon-accurate solution, and hence the algorithm will never be able to produce an output that is known to be at least epsilon accurate. The largest epsilon for which this failure happens is called the Breakdown-epsilon. For Basis Pursuit and Lasso, the Breakdown-epsilon is greater than 1/3 even when the input is well conditioned.
The paradox opens up a new, rather delicate classification theory of which problems can be computed accurately. For example, given standard assumptions from sparse recovery, there are algorithms that can compute a solution to Basis Pursuit accurately; however, this is impossible for Lasso and for Basis Pursuit with noise parameter delta greater than 0. One can, however, compute a solution accurately up to a Breakdown-epsilon that tends to zero as delta tends to zero and coincides with the error bound provided by the theory of sparse recovery. This helps explain the success of many modern algorithms in numerous real-world scenarios, and also explains the cases where algorithms fail and why.
Recommended readings: (lectures 3 and 4)
- Chapters 3 and 15 in "A Mathematical Introduction to Compressive Sensing" (Foucart/Rauhut)
- A first-order primal-dual algorithm for convex problems with applications to imaging
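To make the object of these lectures concrete, here is a minimal sketch of the solvable side of the classification: under standard sparse-recovery assumptions (noiseless data, a well-behaved Gaussian matrix), Basis Pursuit min ||x||_1 subject to Ax = y can be computed via its standard linear-programming reformulation. The random test matrix, seed, and dimensions are illustrative assumptions, not data from the lectures.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as a linear program.

    Variables z = [x, t]: minimise sum(t) subject to
    x - t <= 0, -x - t <= 0 (so t_i >= |x_i|) and Ax = y.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Illustrative setup: a 3-sparse vector observed through a 20 x 40
# Gaussian matrix, well inside the regime where recovery succeeds.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.8]
x_hat = basis_pursuit(A, A @ x_true)
```

Note that this sketch only illustrates the favourable case; the paradox discussed in the abstract concerns what happens once the input data y (or a noise parameter delta) is itself inexact, where no algorithm can certify epsilon accuracy in general.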
A. Hansen - 2/4 - 19/06/2017 1:27:40
A. Hansen - 3/4 - 20/06/2017 1:31:14
A. Hansen - 4/4 - 20/06/2017 1:24:26
A. Montanari - 1/4 - 19/06/2017 1:34:23
A. Montanari - 2/4 - 20/06/2017 1:27:14
A. Montanari - 3/4 - 21/06/2017 1:32:20
A. Montanari - 4/4 - 22/06/2017 1:29:45
F. Bach - 19/06/2017 1:04:15
C. Fernandez-Granda - 20/06/2017 1:01:37
É. Chouzenoux - 21/06/2017 1:07:15
C. Boyer - 22/06/2017 1:09:14
L. Rosasco - 1/4 - 21/06/2017 1:36:27
L. Rosasco - 2/4 - 21/06/2017 1:29:38
L. Rosasco - 3/4 - 22/06/2017 1:38:19
L. Rosasco - 4/4 - 22/06/2017 1:23:30
