2016-07-14

Algorithms for Big Data by Jelani Nelson (Harvard University)

Source: Harvard University, July 12, 2016
Algorithms for Big Data (COMPSCI 229r)

Lecture 1  Logistics, course topics, basic tail bounds (Markov, Chebyshev, Chernoff, Bernstein), Morris' algorithm (see the code sketch after this list). 1:26:05
Lecture 2  Distinct elements, k-wise independence, geometric subsampling of streams. 1:23:57
Lecture 3  Necessity of randomized/approximate guarantees, linear sketching, AMS sketch, p-stable sketch for p < 2. 1:22:13
Lecture 4  p-stable sketch analysis, Nisan's PRG, ℓp estimation for p > 2 via max-stability. 1:21:03
Lecture 5  Analysis of ℓp estimation algorithm via max-stability, deterministic point query via incoherent matrices. 1:23:03
Lecture 6  CountMin sketch (see the second code sketch after this list), point query, heavy hitters, sparse approximation. 1:25:47
Lecture 7  CountSketch, ℓ0 sampling, graph sketching. 1:19:26
Lecture 8  Amnesic dynamic programming (approximate distance to monotonicity). 1:22:02
Lecture 9  Communication complexity (Indexing, Gap-Hamming) and applications to median and F0 lower bounds. 1:23:12
Lecture 10  Randomized and approximate F0 lower bounds, Disjointness, Fp lower bound, dimensionality reduction (JL lemma). 1:18:47
Lecture 11  Khintchine, decoupling, Hanson-Wright, proof of the distributional JL lemma. 1:19:20
Lecture 12  Alon's JL lower bound; beyond worst-case analysis: suprema of Gaussian processes, Gordon's theorem. 1:30:06
Lecture 13  ORS theorem (distributional JL implies Gordon's theorem), sparse JL. 1:22:13
Lecture 14  Sparse JL proof wrap-up, Fast JL Transform, approximate nearest neighbor. 1:21:38
Lecture 15  Approximate matrix multiplication with Frobenius error via sampling/JL, matrix median trick, subspace embeddings. 1:23:56
Lecture 16  Linear least squares via subspace embeddings, leverage score sampling, non-commutative Khintchine, oblivious subspace embeddings. 1:23:02
Lecture 17  Oblivious subspace embeddings, faster iterative regression, sketch-and-solve regression. 1:17:47
Lecture 18  Low-rank approximation, column-based matrix reconstruction, k-means, compressed sensing. 1:23:14
Lecture 19  RIP and its connection to incoherence, basis pursuit, Krahmer-Ward theorem. 1:19:47
Lecture 20  Krahmer-Ward proof, Iterative Hard Thresholding. 1:19:58
Lecture 21  ℓ1/ℓ1 recovery, RIP1, unbalanced expanders, Sequential Sparse Matching Pursuit. 1:24:59
Lecture 22  Matrix completion. 1:25:06
Lecture 23  External memory model: linked list, matrix multiplication, B-tree, buffered repository tree, sorting. 1:22:52
Lecture 24  Competitive paging; cache-oblivious algorithms: matrix multiplication, self-organizing linked list, static B-tree, lazy funnelsort. 1:24:48
Lecture 25  MapReduce: TeraSort, minimum spanning tree, triangle counting. 1:21:17
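Since the lecture titles only name the algorithms, here is a minimal Python sketch of Morris' approximate counter from Lecture 1, for a concrete feel for the material. The class and method names are my own, not the course's.

```python
import random

class MorrisCounter:
    """Morris' approximate counter: instead of a full count n, it keeps only
    a register X with X roughly log2(n), and estimates n as 2^X - 1."""

    def __init__(self):
        self.x = 0  # the small register

    def increment(self):
        # Increment X with probability 2^{-X}.
        if random.random() < 2.0 ** (-self.x):
            self.x += 1

    def estimate(self):
        # E[2^X - 1] equals the true number of increments.
        return 2 ** self.x - 1

# Averaging many independent counters reduces the variance
# (the variance-reduction step covered in the lecture).
counters = [MorrisCounter() for _ in range(100)]
for _ in range(10_000):
    for c in counters:
        c.increment()
print(sum(c.estimate() for c in counters) / len(counters))  # roughly 10000
```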
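In the same spirit, a minimal sketch of the CountMin structure from Lecture 6, assuming nonnegative integer item IDs and nonnegative updates; the hash family and parameter choices here are illustrative, not necessarily the lecture's exact construction.

```python
import random

class CountMinSketch:
    """CountMin: d rows of w counters with an independent hash per row.
    A point query returns an overestimate of an item's frequency, off by
    at most (2/w) * total mass with probability >= 1 - (1/2)^d."""

    P = 2**61 - 1  # a large prime for the hash family

    def __init__(self, width, depth):
        self.w = width
        self.d = depth
        # h_r(x) = ((a_r * x + b_r) mod P) mod w, one pair (a_r, b_r) per row.
        self.hashes = [(random.randrange(1, self.P), random.randrange(self.P))
                       for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, r, x):
        a, b = self.hashes[r]
        return ((a * x + b) % self.P) % self.w

    def update(self, x, delta=1):
        # Stream update: add delta to x's counter in every row.
        for r in range(self.d):
            self.table[r][self._bucket(r, x)] += delta

    def query(self, x):
        # Point query: the minimum over rows is the least-contaminated counter.
        return min(self.table[r][self._bucket(r, x)] for r in range(self.d))
```

Taking the minimum over rows works because every row's counter only ever overcounts, so the row with the least hash-collision mass gives the tightest estimate.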
