2017-04-27

Optimization and Decision-Making Under Uncertainty

(Click the upper-left icon to select videos from the playlist.)

Source: Simons Institute, September 19, 2016
The classic area of online algorithms requires us to make decisions over time as the input is slowly revealed, without (complete) knowledge of the future. This has been widely studied, e.g., in the competitive analysis model and, in parallel, in the model of regret minimization. Another widely studied setting incorporates stochastic uncertainty about the input; this uncertainty reduces over time, but postponing decisions is either costly or impossible. Problems of interest include stochastic optimization, stochastic scheduling and queueing problems, bandit problems in learning, dynamic auctions in mechanism design, secretary problems, and prophet inequalities. Recent developments have shown connections between these models, with new algorithms that interpolate between these settings and combine different techniques. The goal of the workshop is to bring together researchers working on these topics, from areas such as online algorithms, machine learning, queueing theory, mechanism design, and operations research, to exchange ideas and techniques and forge deeper connections.
For more information, please visit https://simons.berkeley.edu/workshops/uncertainty2016-1.
These presentations were supported in part by an award from the Simons Foundation.

Introducing Decision-Making Under Uncertainty to Medical Research:... 56:15
Don Berry, University of Texas MD Anderson Cancer Center
https://simons.berkeley.edu/talks/don...
Optimal A-B Testing 47:53
Stable Marriages in Metric Spaces 30:25
The Simplex and Policy-Iteration Methods are Strongly Polynomial... 52:16
Interpolating Between Stochastic and Worst-case Optimization 33:50
Smarter Tools for (Citi)Bike-Sharing 36:19
How to Predict When Estimation is Hard: Algorithms for Learning on Graphs 51:24
From Predictions to Decisions: Limitations and Possibilities of Optimization from Samples 48:10
Distributed Partial Clustering 35:47
Online Vector Packing 52:14
Clustering Using Pairwise Comparisons 30:32
Online Algorithms for Covering and Packing Problems with Convex Objectives 33:41
Kernel-Based Methods for Bandit Convex Optimization 58:30
Always Valid Inference: Continuous Monitoring of A/B Tests 50:19
Revisiting the Exploration-Exploitation Trade-Off in Bandit Models 31:40
Distribution-Free Models of Social and Information Networks 55:10
Learning in Games with Best-Response Oracles 30:37
Online Optimization and Learning Under Long-Term Convex Constraints and Objectives 34:48
Avoiding Cascading Failures for Time-of-Use Pricing 31:04
Procrastination with Variable Present Bias 33:51
Simple Pricing Schemes for Consumers with Evolving Values 34:18
A Unified Duality Theory for Bayesian Mechanism Design 35:29
Bandits and Agents: How to Incentivize Exploration? 39:07
Online Ad Allocation: Robust Optimization for Repeated Auctions 39:46
Competitive Algorithms from Competitive Equilibria 47:38
Online Flow-Time Optimization 36:13
Online Algorithms with Recourse 32:32
Scheduling with Uncertain Processing Times 34:32
Secretary Problems and Online Selection with A Priori Information 48:06
Optimal Online Algorithms via Linear Scaling 29:39
