Call for Papers
Important Dates
- Submission system opens: April 20th 2022 11:59 PM (AoE time)
- Submission deadline: June 3rd 2022 11:59 PM (AoE time)
- Author notification: June 13th 2022 11:59 PM (AoE time)
- Lightning Talk deadline (spotlight talks): TBA
- Camera ready date: TBA
- Workshop day: TBA
The Call
Whether in robotics, protein design, or the physical sciences, one often faces decisions about which data to collect or which experiments to perform. There is thus a pressing need for algorithms and sampling strategies that make intelligent decisions about the data collection process and enable data-efficient learning. Experimental design and active learning have been major research focuses within machine learning and statistics, addressing both the theoretical and the algorithmic aspects of efficient data collection. The goal of this workshop is to identify the missing links that hinder the direct application of these principled research ideas to practically relevant solutions. Progress in this area can yield immense benefits by bringing experimental design and active learning algorithms to emerging high-impact applications such as materials design, computational biology, causal discovery, drug design, and citizen science.
We welcome submissions of 4-6 pages (excluding references) in the (modified) JMLR Workshop and Proceedings format. An appendix of any length is allowed after the references. Submissions should be non-anonymous. All accepted papers will be presented as posters (recently published or under-review work is also welcome). There will be no archival proceedings; however, accepted papers will be made available on the workshop website. Papers should be submitted via OpenReview.
Technical topics of interest include (but are not limited to):
- Large-scale and real-world experimental design (e.g. drug design, physics, robotics, material design, protein design, causal discovery)
- Efficient active learning and exploration
- High-dimensional, scalable Bayesian and bandit optimization (e.g. contextual, multi-task)
- Sample-efficient interactive learning, hypothesis and A/B testing
- Corrupted or indirect measurements, multi-fidelity experimentation
- Domain-knowledge integration (e.g. from physics, chemistry, biology, etc.)
- Safety and robustness during experimentation and of resulting designs
- Experimental design and active learning in reinforcement learning
Best Paper Award
We will award a best student paper award, worth 1000 USD, to the top paper selected by a reviewing committee.