Reading Group

We are holding an online reading group focusing on modern adaptive experimental design and active learning in the real world. All interested participants are welcome to join!

The reading group will be held on Thursdays at 10am PST (California) / 6pm GMT (UK) / 7pm CET (Zurich). To add this to your calendar, click here. To receive information via email, subscribe to our mailing list.

To join, please use the following Zoom link: https://ethz.zoom.us/j/67585775251

Speaker Schedule

  1.  January 12, 2023    Kelly W. Zhang
  2.  January 19, 2023    Kevin Jamieson
  3.  January 26, 2023    Raul Astudillo
  4.  February 2, 2023    Emmanuel Bengio
  5.  February 23, 2023   Haitham Bou Ammar
  6.  March 2, 2023       Kevin Tran
  7.  March 9, 2023       Zi Wang
  8.  March 16, 2023      Viraj Mehta
  9.  March 23, 2023      Johannes Kirschner

Next Talk

Raul Astudillo, January 26, 2023

Title: Composite Bayesian Optimization for Efficient and Scalable Adaptive Experimentation

Abstract: Experimentation is ubiquitous in science and a key driver of human progress. Many experimentation tasks can be cast as optimization problems whose objective functions are expensive or time-consuming to evaluate. Bayesian optimization has emerged as a powerful tool for tackling such problems. However, many experimentation tasks arising in high-stakes applications such as materials design and drug discovery remain beyond the reach of standard approaches. In this talk, I will describe recent advances that aim to address this challenge. In particular, I will focus on how the composite structure of many experimentation tasks can be exploited to improve the efficiency and scalability of Bayesian optimization methods. Finally, I will outline directions for future research toward a general framework for efficient end-to-end adaptive experimental design in complex settings.

Bio: Raul is a Postdoctoral Scholar in the Department of Computing and Mathematical Sciences at Caltech, hosted by Professor Yisong Yue. He obtained his Ph.D. in Operations Research and Information Engineering from Cornell University, working under the supervision of Professor Peter Frazier. Before that, he completed the undergraduate program in Mathematics offered jointly by the University of Guanajuato and the Center for Research in Mathematics in Mexico. In 2021, he was a Visiting Researcher at Meta within the Adaptive Experimentation team led by Eytan Bakshy. Raul’s research interests lie at the intersection between operations research and machine learning, with an emphasis on Bayesian methods for efficient sequential data collection. His work combines principled decision-theoretic foundations with sophisticated machine learning tools to develop frameworks for adaptive experimentation in robotics, materials design, cellular agriculture, and other scientific applications.

Past Talks

Kevin Jamieson, January 19, 2023

Title: Lessons learned in deploying bandit algorithms

Abstract: Bandit algorithms, and adaptive experimentation more generally, promise the same statistically significant guarantees as, say, non-adaptive A/B testing, but require far fewer trials, resulting in savings of time and money. However, such promises hold only under assumptions that rarely hold in practice, and for algorithms that may require unrealistic data interaction patterns. This talk explores this tension through two case studies in deploying state-of-the-art algorithms to a large online experimentation platform and a robotics application in an industrial setting. Problems will be discussed, sensible solutions will be proposed, and opinions will be offered.

Bio: Kevin Jamieson is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and is the Guestrin Endowed Professor in Artificial Intelligence and Machine Learning. He received his B.S. from the University of Washington, his M.S. from Columbia University, and his Ph.D. in 2015 from the University of Wisconsin–Madison under the advisement of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc in the AMP Lab at the University of California, Berkeley, working with Benjamin Recht. Jamieson’s work has been recognized by an NSF CAREER Award and an Amazon Faculty Research Award. Jamieson’s research explores how to leverage already-collected data to inform what future measurements to make next, in a closed loop.

Kelly W. Zhang, January 12, 2023

Title: Inference after Adaptive Sampling for Longitudinal Data

Abstract: Online algorithms that learn to optimize treatments over time are increasingly used in a variety of digital intervention problems. These algorithms repeatedly update parameter estimates as data accrues, and these estimates are used to inform treatment decisions. Such algorithms are called “adaptive sampling” algorithms, and the resulting data is considered “adaptively collected.” In this work, we focus on data collected by a large class of adaptive sampling algorithms that are designed to optimize treatment decisions online using accruing data from multiple users. Combining or “pooling” data across users allows adaptive sampling algorithms to potentially learn faster. However, by pooling, these algorithms induce dependence between the collected user data trajectories, which makes statistical inference on this type of data especially challenging. We provide methods to perform a variety of statistical analyses on such adaptively collected data, including Z-estimation, off-policy analyses, and inferring excursion effects. This work is motivated by our experience designing experiments in which online reinforcement learning algorithms pool data across users to learn to optimize treatment decisions, and reliable statistical inference is essential for conducting a variety of statistical analyses after the experiment is over.

Bio: Kelly W. Zhang is a final-year Ph.D. candidate in computer science at Harvard University, advised by Susan Murphy and Lucas Janson. Her research focuses on addressing challenges faced when applying reinforcement learning algorithms to real-world problems. She has developed methods for statistical inference on data collected by bandit and reinforcement learning algorithms, i.e., adaptively collected data. She also works on developing the reinforcement learning algorithm to be used in Oralytics, a mobile health app aimed at helping users develop healthy oral hygiene habits, in collaboration with Oral-B and researchers at UCLA and the University of Michigan. She is supported by an NSF Graduate Research Fellowship.