Offline Reinforcement Learning Workshop
Neural Information Processing Systems (NeurIPS)
December 12, 2020
@OfflineRL · #OFFLINERL2020
Topics of interest include:
- open problems in offline RL
- theoretical understanding of offline (or batch) RL methods
- offline model selection, off-policy evaluation, and theoretical limitations
- properties of supervision needed to guarantee the success of offline RL methods
- practical applications of offline RL and how these guide novel problem formulations
- connections with related areas (e.g., causal inference, optimization, self-supervised learning, uncertainty)
- relationship to, and integration with, conventional online RL paradigms
- evaluation protocols, benchmarks, and datasets
For a more detailed summary of questions we are particularly excited about, please visit the front page. We welcome theoretical and/or empirical contributions. We are especially excited about work at the intersection of several of these questions.
We accept submissions of up to 8 pages of content (with no limit on references and supplementary material) that have not previously been accepted at an archival venue (e.g., ICML, NeurIPS, ICLR). While up to 8 pages of content is allowed, we strongly encourage authors to limit their submissions to 4-6 pages to ensure higher-quality reviewer feedback. Papers may use any style file (e.g., ICML, NeurIPS, ICLR) and should be anonymized for double-blind review.
Evaluation Criteria: We plan to evaluate papers on clarity, potential impact, significance, and novelty of the proposed ideas. We are happy to accept works-in-progress to encourage discussion at the workshop. All accepted papers will be presented on the workshop website as posters or recorded talks, together with camera-ready papers; acceptance is non-archival.
| Event | Date |
| --- | --- |
| Day of workshop | December 12, 2020 |
| Submission site opens | |
| Submission site | |
| Submission deadline | |
| Reviewing starts | |
| Reviews due | |
| Decisions announced | |
| Camera-ready and video submission due | |