Unifying task specification in reinforcement learning

Martha White (University of Alberta)
Markov decision processes have long been the standard formalism for sequential decision-making in reinforcement learning. But this is not the full story: in practice there are specialized instances that require separate treatment, most notably episodic and continuing problems. In this talk, I will discuss a generalization of the discount that enables a more unified formalism for these settings, and some of its advantages: it allows a broader class of policy evaluation questions to be specified, and it unifies the theoretical treatment of these different settings.
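As a rough illustration of how generalizing the discount can unify episodic and continuing problems, consider attaching a discount to each transition, so that an episode boundary is simply a transition whose discount is zero and the same update rule applies everywhere. This is only a minimal sketch motivated by the abstract; the tiny two-state chain, the step function, and the step sizes below are illustrative assumptions, not the speaker's exact formulation.

```python
import numpy as np

# Sketch (assumption, not from the talk): a two-state "episodic" chain written
# as a continuing problem by returning a discount with every transition.
# gamma = 0.9 on ordinary steps, gamma = 0 on the transition that ends an
# episode and resets to the start state, so one update rule covers both cases.

NUM_STATES = 2
GAMMA_CONTINUE = 0.9

def step(s):
    """Return (next_state, reward, discount) under a fixed policy."""
    if s == 0:
        return 1, 0.0, GAMMA_CONTINUE   # ordinary continuing transition
    # State 1 ends the episode: reward 1, discount 0, reset to state 0.
    return 0, 1.0, 0.0

def td0(num_steps=50_000, alpha=0.05):
    v = np.zeros(NUM_STATES)
    s = 0
    for _ in range(num_steps):
        s_next, r, gamma = step(s)
        # TD(0) with a transition-dependent discount: gamma == 0 cuts the
        # return at the episode boundary, so no special-case termination code.
        v[s] += alpha * (r + gamma * v[s_next] - v[s])
        s = s_next
    return v

print(td0())  # roughly [0.9, 1.0]: v(0) = 0 + 0.9 * v(1), v(1) = 1 + 0 * v(0)
```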

Speaker bio

Martha White is an assistant professor of Computing Science at the University of Alberta. Previously she was an assistant professor in the School of Informatics and Computing at Indiana University in Bloomington, and received her PhD in Computing Science from the University of Alberta in 2015. Her primary research goal is to develop algorithms for autonomous agents learning from streams of data. She focuses on developing practical algorithms for reinforcement learning and representation learning.
