2017-01-06

Compressive Sensing and Sparse Recovery by Justin Romberg

(Click the upper-left icon to select videos from the playlist.)
Source: 谷实验室, October 24, 2013
Compressive Sensing and Sparse Recovery is a short course taught by Professor Justin Romberg during his visit to Tsinghua University from Oct. 14th to Oct. 18th, 2013. Professor Justin Romberg is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology.
More details about the course: http://gu.ee.tsinghua.edu.cn/index.ph...
All rights reserved by Professor Justin Romberg.

Lecture 1 (Oct 14th) Basis expansion fundamentals 1:01:42
This lecture covers fundamental mathematical background that is used throughout the subsequent lectures. It focuses on how to represent the signal of interest as a discrete linear combination of basis signals. For orthogonal basis expansions, the generalized Parseval's theorem shows that all signal processing can be done by manipulating the discrete sequence of expansion coefficients. The lecture also discusses the Discrete Cosine Transform (DCT) and its applications in image and video compression, and ends with non-orthogonal bases and overcomplete frames.
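As a small illustration of the Parseval relation for an orthonormal expansion, here is a sketch using the DCT; the test signal and the SciPy routines are my own choices, not taken from the lecture.
```python
# A minimal sketch: expand a signal in the orthonormal DCT basis and check
# that energy is preserved (Parseval), so processing can be done directly
# on the coefficient sequence. Assumes NumPy and SciPy are available.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))      # a smooth-ish test signal

alpha = dct(x, norm='ortho')                 # expansion coefficients <x, psi_k>
x_rec = idct(alpha, norm='ortho')            # synthesis from the coefficients

print(np.allclose(x, x_rec))                          # perfect reconstruction
print(np.allclose(np.sum(x**2), np.sum(alpha**2)))    # Parseval: ||x||^2 = ||alpha||^2
```
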
Lecture 2 (Oct 14th) overview of sparsity theory and applications 56:38
This lecture reviews the theory of sparse representations and its applications in image processing. A detailed performance comparison between the DCT and wavelets for image approximation is carried out, showing the superiority of wavelets. With sparse representations, more efficient algorithms can be developed for applications such as compression, denoising, inverse problems, and acquisition.
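The flavor of that comparison can be reproduced in a few lines; the 1-D piecewise-constant test signal and the hand-rolled Haar transform below are stand-ins for the images and wavelet families used in the lecture.
```python
# A rough sketch of K-term (nonlinear) approximation: keep the K largest
# coefficients in each basis and compare reconstruction errors. For a
# piecewise-constant signal the Haar wavelet basis beats the DCT.
import numpy as np
from scipy.fft import dct, idct

def haar_fwd(x):
    """Orthonormal Haar transform (length must be a power of two)."""
    c = x.astype(float).copy()
    n = len(c)
    while n > 1:
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2)   # pairwise averages
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2)   # pairwise details
        c[:n // 2], c[n // 2:n] = a, d
        n //= 2
    return c

def haar_inv(c):
    """Inverse of haar_fwd."""
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        rec = np.empty(2 * n)
        rec[0::2] = (a + d) / np.sqrt(2)
        rec[1::2] = (a - d) / np.sqrt(2)
        c[:2 * n] = rec
        n *= 2
    return c

def keep_largest(coeffs, K):
    """Zero all but the K largest-magnitude coefficients."""
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-K:]
    out[idx] = coeffs[idx]
    return out

N, K = 1024, 40
rng = np.random.default_rng(1)
x = np.repeat(rng.standard_normal(16), N // 16)   # piecewise-constant signal

err_dct  = np.linalg.norm(x - idct(keep_largest(dct(x, norm='ortho'), K), norm='ortho'))
err_haar = np.linalg.norm(x - haar_inv(keep_largest(haar_fwd(x), K)))
print(f"relative K-term error, DCT : {err_dct / np.linalg.norm(x):.3e}")
print(f"relative K-term error, Haar: {err_haar / np.linalg.norm(x):.3e}")
```
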
Lecture 3 (Oct 14th) sparse approx algorithms 42:54
This lecture introduces the problem of finding a sparse representation of a signal and presents two very different algorithmic frameworks: greedy algorithms and convex programming. Greedy algorithms are based on the idea of matching pursuit, which selects a subset of a dictionary with which to represent a given signal. Convex programming relaxes the combinatorial problem into a closely related convex optimization program, Basis Pursuit, which can be recast as a linear program.
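As a concrete sketch of the greedy side, here is a bare-bones matching pursuit loop over a dictionary with unit-norm columns; the stopping rule, dictionary, and test signal are illustrative choices of mine, not the lecture's.
```python
# A minimal matching pursuit sketch: at each step, pick the dictionary atom
# most correlated with the residual, add its contribution, and repeat.
import numpy as np

def matching_pursuit(D, y, n_iter=200, tol=1e-6):
    """Greedily approximate y as D @ alpha; D must have unit-norm columns."""
    alpha = np.zeros(D.shape[1])
    r = y.copy()                          # residual
    for _ in range(n_iter):
        corr = D.T @ r                    # correlations with every atom
        k = np.argmax(np.abs(corr))       # best-matching atom
        alpha[k] += corr[k]               # update its coefficient
        r = r - corr[k] * D[:, k]         # remove its contribution
        if np.linalg.norm(r) < tol * np.linalg.norm(y):
            break
    return alpha

# toy example: y is a sparse combination of atoms from a random dictionary
rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
alpha_true = np.zeros(256)
alpha_true[[3, 70, 200]] = [1.5, -2.0, 1.0]
y = D @ alpha_true

alpha_hat = matching_pursuit(D, y)
print("residual norm:", np.linalg.norm(y - D @ alpha_hat))
```
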
Lecture 4 (Oct 15th) uncertainty principles for unions of bases 1:03:43
This lecture first introduces the uncertainty principle between the identity basis and the Fourier basis, which ensures that a sparse combination of these dictionary elements can be written in only one way with a small number of terms. More generally, this uncertainty principle is characterized by the concept of coherence, which strongly affects the performance of sparse representations in an overcomplete dictionary.
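To make the coherence idea concrete, here is a quick computation (my own, not from the slides) of the mutual coherence of the two-ortho dictionary built from the identity and the orthonormal DFT, which comes out to 1/sqrt(N).
```python
# A small sketch: mutual coherence of the dictionary [I  F], where F is the
# orthonormal DFT basis. The largest inner product between distinct columns
# is 1/sqrt(N), the smallest possible for a union of two orthobases.
import numpy as np

N = 64
I = np.eye(N)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # orthonormal Fourier basis
D = np.hstack([I, F])                      # overcomplete two-ortho dictionary

G = np.abs(D.conj().T @ D)                 # magnitudes of all inner products
np.fill_diagonal(G, 0)                     # ignore the trivial self-products
mu = G.max()

print(mu, 1 / np.sqrt(N))                  # both are 0.125 for N = 64
```
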
Lecture 5 (Oct 15th) overview of compressive sensing 1:20:18
This lecture gives a high-level survey of compressive sensing and its applications. Compressive sensing is a new sampling theory that leverages the compressibility of signals to replace front-end acquisition complexity with back-end computing. The key roles are played by a new uncertainty principle and by randomness. A range of applications is presented to illustrate how compressive sensing works.
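As a toy end-to-end illustration of that pipeline (random measurements up front, optimization in the back end), the sketch below recovers a sparse vector from M < N Gaussian measurements by recasting basis pursuit as a linear program; the problem sizes and the SciPy LP solver are arbitrary choices of mine.
```python
# A minimal compressive sensing sketch: measure a sparse x with a random
# Gaussian matrix, then recover it by basis pursuit (min ||x||_1 s.t. Ax = y)
# written as a linear program via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M, S = 200, 60, 8                           # ambient dim, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = A @ x_true                                 # compressive measurements

# LP: minimize 1'(u + v)  subject to  [A, -A][u; v] = y,  u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None),
              method="highs")                  # needs a reasonably recent SciPy
x_hat = res.x[:N] - res.x[N:]

print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```
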
Lecture 6 (Oct 17th) l1 minimization duality and optimality 52:00
This is the first lecture of a trilogy on l1 minimization for sparse recovery. First, a necessary and sufficient condition for l1 recovery is derived, which depends only on the set on which the sparse signal is supported and the signs of the entries on that set. Second, a more workable sufficient condition for l1 recovery is derived from duality and optimality. Several examples follow to show how these conditions are used in practice.
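One way to see a sufficient condition of this kind in action is to build a least-squares dual vector for a given support and sign pattern and check that it stays strictly below 1 in magnitude off the support; the construction below is a standard dual-certificate check, but the matrix sizes are my own toy choices and the details may differ from the lecture's exact statement.
```python
# A sketch of a dual-certificate-style sufficient condition for l1 recovery:
# on the support Gamma the certificate v = A'w must match the signs, and off
# the support its entries must have magnitude strictly less than 1.
import numpy as np

rng = np.random.default_rng(4)
M, N, S = 50, 120, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)

gamma = rng.choice(N, S, replace=False)          # support of the sparse signal
signs = rng.choice([-1.0, 1.0], S)               # its sign pattern on the support

A_g = A[:, gamma]                                # must have full column rank
# least-squares certificate: w = A_g (A_g' A_g)^{-1} signs, then v = A' w
w = A_g @ np.linalg.solve(A_g.T @ A_g, signs)
v = A.T @ w

off = np.setdiff1d(np.arange(N), gamma)
print("v matches signs on support:", np.allclose(v[gamma], signs))
print("max |v| off the support   :", np.abs(v[off]).max())   # want < 1
```
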
Lecture 7 (Oct 17th) l1 recovery with the restricted isometry property 44:33
This is the second lecture of a trilogy on l1 minimization for sparse recovery. It is proved that if the sensing matrix is an approximate isometry for all 3S-sparse vectors, then l1 recovery is guaranteed to find the S-sparse vector. The derivations rest on the fact that the cone condition is a sufficient condition for l1 recovery.
Lecture 8 (Oct 17th) stability of L1 recovery 38:22
This is the last lecture of a trilogy on l1 minimization for sparse recovery. It considers how l1 recovery performs in more realistic situations, such as when the signal of interest is not exactly sparse and the measurements are noisy. It is shown that the recovery error is bounded in terms of how well the signal is approximated by a sparse signal and the energy of the measurement noise.
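The noisy case can be simulated directly: solve the constrained l1 problem min ||x||_1 subject to ||Ax - y||_2 <= eps and compare the recovery error with the noise level. The sketch below leans on CVXPY; the problem sizes and noise level are illustrative assumptions of mine.
```python
# A rough sketch of stable l1 recovery from noisy measurements (basis pursuit
# denoising): the recovery error should be on the order of the noise energy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
N, M, S, sigma = 200, 80, 8, 0.01

x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

A = rng.standard_normal((M, N)) / np.sqrt(M)
noise = sigma * rng.standard_normal(M)
y = A @ x_true + noise
eps = np.linalg.norm(noise)        # in practice, an estimate of the noise energy

x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(A @ x - y, 2) <= eps])
prob.solve()

print("noise energy  :", eps)
print("recovery error:", np.linalg.norm(x.value - x_true))
```
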
Lecture 9 (Oct 18th) the RIP for Gaussian matrices 49:09
This lecture focuses on another mathematical foundation of compressive sensing: random matrices are restricted isometries. By putting together a handful of elementary results from probability theory, it is proved that an M by N iid Gaussian random matrix is a restricted isometry of size 2S with extraordinarily high probability when M is on the order of S*log(N/S).
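A quick Monte Carlo experiment (my own, not from the lecture) shows the concentration behind this result: for random S-sparse unit vectors x, ||Ax||^2 stays close to ||x||^2 = 1 once M is a modest multiple of S*log(N/S). Note this only samples random supports; a true RIP bound would have to hold uniformly over all S-sparse vectors.
```python
# A sketch of the restricted-isometry behaviour of an iid Gaussian matrix:
# sample random S-sparse unit vectors and record how far ||Ax||^2 strays
# from 1. With M a small multiple of S*log(N/S), the spread is small.
import numpy as np

rng = np.random.default_rng(6)
N, S = 512, 10
M = int(4 * S * np.log(N / S))                 # ~ 4 * S * log(N/S) measurements
A = rng.standard_normal((M, N)) / np.sqrt(M)   # variance 1/M so E||Ax||^2 = ||x||^2

ratios = []
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(A @ x) ** 2)

ratios = np.array(ratios)
print("min and max of ||Ax||^2 over the trials:", ratios.min(), ratios.max())
```
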
Lecture 10 (Oct 18th) low rank recovery and bilinear problems 51:44
This lecture introduces some results on low rank recovery. Matrix completion aims to fill in a matrix from observations of only some of its entries. It is shown that the missing entries can be recovered if the matrix is low rank and its singular vectors are incoherent. The more general problem of recovering a low rank matrix from a set of linear measurements is also considered. Finally, it is demonstrated that the problem of solving bilinear equations can be reformulated as low rank recovery.
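The matrix completion part has a compact convex formulation: minimize the nuclear norm subject to agreeing with the observed entries. Below is a toy CVXPY version; the matrix size, rank, and sampling rate are my own choices.
```python
# A minimal matrix completion sketch: observe a random subset of entries of a
# low-rank matrix and fill in the rest by nuclear-norm minimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
n, r = 30, 2
M_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-2 matrix

mask = (rng.random((n, n)) < 0.5).astype(float)   # observe about half of the entries

X = cp.Variable((n, n))
constraints = [cp.multiply(mask, X - M_true) == 0]   # agree on observed entries
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

err = np.linalg.norm(X.value - M_true) / np.linalg.norm(M_true)
print("relative completion error:", err)
```
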
Lecture 11 (Oct 18th) dynamic l1 recovery 29:10
This lecture introduces a dynamical framework for sparse recovery, drawn from recent research by Prof. Romberg and his collaborators. First, it discusses fast updating of solutions of l1 optimization programs in various scenarios: when the underlying signal changes slightly, when measurements are added or removed, when the weights change, and when measurements stream in for an evolving signal. Second, it introduces systems of nonlinear differential equations that solve l1 and related optimization programs and can be implemented as continuous-time neural nets.
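One example of such a dynamical system is the locally competitive algorithm (LCA), a continuous-time network whose equilibrium solves the l1-penalized least-squares problem. The forward-Euler simulation below is only a generic sketch of that idea, with step size, threshold, and problem sizes chosen arbitrarily; it is not the specific algorithm presented in the lecture.
```python
# A sketch of a continuous-time "neural net" for sparse recovery (LCA-style):
# internal states u evolve by an ODE, the outputs are a = soft-threshold(u),
# and the equilibrium minimizes 0.5*||y - A a||^2 + lam*||a||_1.
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(8)
N, M, S, lam = 200, 80, 8, 0.05

x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

b = A.T @ y                 # constant drive from the measurements
G = A.T @ A - np.eye(N)     # lateral inhibition between competing atoms

u = np.zeros(N)
dt, tau = 0.05, 1.0
for _ in range(4000):       # forward-Euler integration of tau*du/dt = b - u - G a
    a = soft_threshold(u, lam)
    u = u + dt * (b - u - G @ a) / tau

a = soft_threshold(u, lam)
print("relative recovery error:",
      np.linalg.norm(a - x_true) / np.linalg.norm(x_true))
```
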