2017-05-18

Representation Learning


source: Simons Institute, March 27, 2017
This workshop will focus on dramatic advances in representation and learning taking place in natural language processing, speech and vision. For instance, deep learning can be thought of as a method that combines the task of finding a classifier (which we can think of as the top layer of the deep net) with the task of learning a representation (namely, the representation computed at the last-but-one layer).
Developing a theory for such empirical work is an exciting quest, especially since the empirical work draws upon non-convex optimization. The workshop will draw a mix of theorists and practitioners, and the following is a list of sample issues that will be discussed:
(a) Which models for representation make more sense than others, and why? In other words, what patterns in data are they capturing, and how are those patterns useful?
(b) What is an analog of generalization theory for representation learning? Can it lead to a theory of transfer learning to new distributions of inputs?
(c) How can we design algorithms for representation learning with provable guarantees? What progress has already been made, and what lessons can we draw from it?
(d) How can we learn representations that combine probabilities and logic?
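To make the classifier/representation split above concrete, here is a minimal PyTorch sketch (my own illustration, not from the workshop; the layer sizes and the 10-class setup are assumptions): everything up to the last-but-one layer computes the representation, and the final linear layer is the classifier trained on top of it.

```python
# Minimal sketch of "deep net = representation + classifier" (illustrative sizes).
import torch
import torch.nn as nn

# Representation: all layers up to the last-but-one layer of the deep net.
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),  # its output is the learned representation
)

# Classifier: the top layer of the deep net (here, 10 hypothetical classes).
classifier = nn.Linear(64, 10)

x = torch.randn(32, 784)       # a batch of 32 made-up inputs
z = encoder(x)                 # representation at the last-but-one layer
logits = classifier(z)         # top layer does the actual classification
print(z.shape, logits.shape)   # torch.Size([32, 64]) torch.Size([32, 10])
```

Training both parts jointly end to end is what lets one procedure solve the classification task and learn the representation at once; after training, the encoder alone can be reused for transfer to new tasks.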
For more information, please visit https://simons.berkeley.edu/workshops/machinelearning2017-2
These presentations were supported in part by an award from the Simons Foundation.

Geometry, Optimization and Generalization in Multilayer Networks (Nathan Srebro, TTI Chicago) 49:19
A Non-generative Framework and Convex Relaxations for Unsupervised Learning 36:01
Representations for Language: From Word Embeddings to Sentence Meanings 1:14:06
The Missing Signal 59:31
Representations of Relationships in Images and Text 42:54
Spotlight Talk: A Formalization of Representation Learning 17:44
Spotlight Talk: Semi-Random Units for Learning Neural Networks with Guarantees 18:30
Unsupervised Discovery Through Adversarial Self-Play 41:44
Learning from Unlabeled Video 44:54
Deep Reinforcement Learning 1:11:34
Supersizing Self-Supervision: Learning Perception and Action without Human Supervision 40:42
Failures of Deep Learning 45:44
Continuous State Machines and Grammars for Linguistic Structure Prediction 38:00
Adversarial Perceptual Representation Learning Across Diverse Modalities and Domains 46:57
Evaluating Neural Network Representations Against Human Cognition 1:04:15
Representation Learning for Reading Comprehension 45:01
Tractable Learning in Structured Probability Spaces 40:09
Spotlight Talk: Convolutional Dictionary Learning through Tensor Factorization 22:47
Spotlight Talk: How to Escape Saddle Points Efficiently 16:19
Re-Thinking Representational Learning in Robotics and Music 41:54
Representation Learning of Grounded Language and Knowledge: with and without End-to-End Learning 42:44
Unsupervised Representation Learning 1:09:16
Generalization and Equilibrium in Generative Adversarial Nets (GANs) 43:28
Provable Learning of Noisy-or Networks 41:32
Learning Paraphrastic Representations of Natural Language Sentences 39:44
Formation and Association of Symbolic Memories in the Brain 43:17
Learning Representations for Active Vision 1:06:38
Resilient Representation and Provable Generalization 42:18
Word Representation Learning without unk Assumptions 47:18
Spotlight Talk: Performance Guarantees for Transferring Representations 20:15
