# Click the upper-left icon to select videos from the playlist
Source: Simons Institute, May 1, 2017
The aim of this workshop is to bring together a broad set of researchers looking at algorithmic questions that arise in machine learning. The primary target areas will be large-scale learning, including algorithms for Bayesian estimation and variational inference, nonlinear and nonparametric function estimation, reinforcement learning, and stochastic processes including diffusion, point processes and MCMC. While many of these methods have been central to statistical modeling and machine learning, recent advances in their scope and applicability lead to basic questions about their computational efficiency. The latter is often linked to modeling assumptions and objectives. The workshop will examine progress and challenges and include a set of tutorials on the state of the art by leading experts.
For more information, please visit https://simons.berkeley.edu/workshops/machinelearning2017-3
These presentations were supported in part by an award from the Simons Foundation.
- Variational Inference: Foundations and Innovations (1:05:29), David Blei, Columbia University
  https://simons.berkeley.edu/talks/dav...
- Representational and Optimization Properties of Deep Residual Networks (44:41)
- Composing Graphical Models with Neural Networks for Structured Representations and Fast Inference (41:36)
- Scaling Up Bayesian Inference for Big and Complex Data (46:48)
- Unbiased Estimation of the Spectral Properties of Large Implicit Matrices (36:23)
- Stochastic Gradient MCMC for Independent and Dependent Data Sources (53:37)
- On Gradient-Based Optimization: Accelerated, Distributed, Asynchronous and Stochastic (1:02:06)
- Sampling Polytopes: From Euclid to Riemann (48:31)
- Understanding Generalization in Adaptive Data Analysis (38:54)
- Your Neural Network Can't Learn Mine (39:37)
- Efficient Distributed Deep Learning Using MXNet (45:18)
- Machine Learning for Healthcare Data (1:09:46)
- Machine Learning Combinatorial Optimization Algorithms (50:12)
- A Cost Function for Similarity-Based Hierarchical Clustering (40:49)
- The Imitation Learning View of Structured Prediction (45:27)
- Exponential Computational Improvement by Reduction (29:39)
- Embedding as a Tool for Algorithm Design (1:08:02)
- Robust Estimation of Mean and Covariance (35:35)
- Computational Efficiency and Robust Statistics (46:59)
- System and Algorithm Co-Design, Theory and Practice, for Distributed Machine Learning (42:11)
- The Polytope Learning Problem (42:41)
- Computational Challenges and the Future of ML Panel (1:03:24)
- Computationally Tractable and Near Optimal Design of Experiments (1:03:16)
- Learning from Untrusted Data (43:14)
- The "Tell Me Something New" Model of Computation for Machine Learning (41:35)