Symmetry and Structure in Deep Reinforcement Learning

Speaker: Elise van der Pol

In this talk, I will discuss our work on symmetry and structure in reinforcement learning. In particular, I will discuss MDP Homomorphic Networks, a class of networks that ties transformations of observations to transformations of decisions. Such symmetries are ubiquitous in deep reinforcement learning, but often ignored in current approaches. Encoding this prior knowledge in policy and value networks allows us to reduce the size of the solution space, a necessity in problems with large numbers of possible observations. I will showcase the benefits of our approach on agents in virtual environments. Building on the foundations of MDP Homomorphic Networks, I will also discuss our ongoing work on symmetries among multiple agents. This forms a basis for my vision for reinforcement learning in complex virtual environments, as well as for problems with intractable search spaces.
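The core idea of tying transformations of observations to transformations of decisions can be illustrated with a minimal sketch. The snippet below is an illustration of equivariance via group-averaging over a hypothetical two-element symmetry group (a CartPole-style left/right mirror), not the paper's actual MDP Homomorphic Network construction, which builds equivariance into the weights themselves; all function names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unconstrained base policy: 4-dim observation -> 2 action logits.
W = rng.standard_normal((2, 4))

def f(obs):
    """Arbitrary, non-equivariant base policy network."""
    return W @ obs

def flip_obs(obs):
    """Group action on observations: mirror the state."""
    return -obs

def flip_act(logits):
    """Group action on decisions: swap the left/right action logits."""
    return logits[::-1]

def pi(obs):
    """Symmetrized policy: average f over the group orbit, so that
    pi(flip_obs(s)) == flip_act(pi(s)) holds by construction."""
    return 0.5 * (f(obs) + flip_act(f(flip_obs(obs))))

s = rng.standard_normal(4)
# Mirroring the observation mirrors the decision: the equivariance constraint.
assert np.allclose(pi(flip_obs(s)), flip_act(pi(s)))
```

A symmetrized policy like this only ever has to learn one of each mirrored pair of situations, which is the sense in which equivariance shrinks the solution space.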

How can you join?
Note that this is an in-person event held at Lecture Theatre B in the computer science department.
Register here.
(Registration closes two hours before the seminar begins.)

Speaker bio

Elise van der Pol did her PhD in the Amsterdam Machine Learning Lab under Max Welling. Her research interests lie in structure, symmetry, and equivariance in reinforcement learning and machine learning. During her PhD, Elise spent time as a research scientist intern at DeepMind. She was an invited speaker at the self-supervision for reinforcement learning workshop at ICLR 2021 and a co-organizer of the workshop on ecological/data-centric reinforcement learning at NeurIPS 2021. Before her PhD, she studied Artificial Intelligence at the University of Amsterdam, graduating with a thesis on coordination in deep reinforcement learning. She was also involved in UvA's Inclusive AI.
