2018-03-27

Fast Iterative Methods in Optimization


source: Simons Institute     October 2, 2017
Iterative methods have been greatly influential in continuous optimization; in fact, almost all algorithms in that field are iterative in nature. Recently, a confluence of ideas from optimization and theoretical computer science has led to breakthroughs in understanding, and to improved running-time bounds, for some of the classic iterative continuous optimization primitives. In this workshop we explore these advances as well as the new directions they have opened up. Some of the specific topics this workshop plans to cover are: advanced first-order methods (non-smooth optimization, regularization and preconditioning), structured optimization, fast LP/SDP solvers, advances in interior point methods, and fast streaming/sketching techniques. One of the key themes that will be highlighted is how combining the continuous and discrete points of view can often allow one to achieve near-optimal running time bounds.
For more information, please visit https://simons.berkeley.edu/workshops...
These presentations were supported in part by an award from the Simons Foundation.
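As a toy illustration of the iterative first-order primitives the description refers to, here is a minimal sketch of gradient descent on a smooth convex quadratic. The objective, step size, and iteration count are my own assumptions for illustration and do not come from any of the talks:

import numpy as np

def gradient_descent(grad, x0, step, iters):
    # Basic first-order iteration: x_{k+1} = x_k - step * grad(x_k).
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Hypothetical example: minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite, so f is strongly convex
b = np.array([1.0, 1.0])
x = gradient_descent(lambda x: A @ x - b, np.zeros(2), step=0.2, iters=200)
print(x)                      # approximate minimizer
print(np.linalg.solve(A, b))  # exact solution A^{-1} b, for comparison

With the step size below 2/L (L being the largest eigenvalue of A, here about 3.62), the iterates converge linearly to the unique minimizer; much of the workshop is about how far beyond this basic scheme one can push running-time bounds.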

1 55:29 Continuous Methods for Discrete Optimization: From Convex Relaxations, to Iterative Schemes...
2 47:11 Width-independent Iterative Algorithms for Packing and Covering Programs
3 38:47 Slime Molds and Sparse Recovery
4 45:55 Michael Cohen and ell_p Regression
5 44:36 Chordal Graphs and Sparse Semidefinite Optimization
6 46:26 On the Optimization Landscape of Matrix and Tensor Decomposition Problems
7 45:27 The State of Contemporary Computing Substrates for Optimization Methods
8 39:14 Let's Make Block Coordinate Descent Go Fast
9 45:00 Stochastic, Second-order Black-box Optimization for Step Functions
10 48:22 Algorithmic Tools for Smooth Nonconvex Optimization
11 42:22 Zero-order and Dynamic Sampling Methods for Nonlinear Optimization
12 43:32 Dealing with Linear Constraints via Random Permutation
13 42:05 Randomized Iterative Methods and Complexity for Markov Decision Process
14 49:28 Sketching as a Tool for Fast Algorithm Design
15 49:47 Sublinear Time Low-rank Approximation of Positive Semidefinite Matrices
16 44:16 Hashing-based-estimators for Kernel Density in High Dimensions
17 48:50 Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage
18 51:38 Trends in Large-scale Nonconvex Optimization
19 47:37 Faster Algorithms and New Iterative Methods for Computing the Stationary Distribution
20 34:41 Nonconvex Optimization for High-dimensional Learning: From ReLUs to Submodular Maximization
21 48:12 Will Vanishing Gradients Ever Vanish from Deep Learning?
22 46:46 From Minimum Cut to Submodular Minimization: Leveraging the Decomposable Structure
23 51:40 Natasha 2: Faster Non-convex Optimization Than SGD
