This year’s workshop is the inaugural edition of SPiRL.
Connections to Prior Workshops
This workshop's focus on structure and priors in reinforcement learning builds on a number of related workshops held successfully at past conferences such as NeurIPS and ICML. In particular, besides workshops on recent advances in reinforcement learning methods (such as the Workshop on “Deep Reinforcement Learning” at NeurIPS 2016–2018), we take inspiration from the following workshops:
- ICML 2016 Workshop on “Abstraction in Reinforcement Learning” (designing and learning abstractions);
- NeurIPS 2017 Workshop on “Hierarchical Reinforcement Learning” (learning hierarchically structured action and state spaces);
- LLARLA at ICML 2017–2018 (lifelong transfer and meta-learning in reinforcement learning);
- AutoML at ICML 2017–2018 (meta-learning in domains including reinforcement learning);
- MetaLearn at NeurIPS 2017–2018 (meta-learning in domains including reinforcement learning);
- NAMPI at NeurIPS 2016 and ICML 2018 (program induction, which involves structure and modularity in domains including reinforcement learning).
In contrast to these workshops, which primarily explored methods for learning and designing structure and priors in reinforcement learning, we intend this workshop to be a cross-disciplinary discussion of the utility, necessity, and formulation of structure and priors.