Gradients in Games

David Balduzzi (DeepMind)

Algorithms that optimize multiple objective functions have proliferated recently, including generative adversarial networks (GANs), synthetic gradients, intrinsic curiosity, and others. More generally, there's a shift away from end-to-end learning on a single loss towards modular architectures composed of sub-goals and sub-losses. However, very little is understood about these settings, where there's no longer a loss landscape and gradient descent doesn't necessarily descend. In this talk, I will discuss the general setting, recent work on the geometry of interacting losses, and implications for learning.
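To make the last point concrete, here is a minimal sketch (not from the talk itself; the bilinear game, step size, and function names are illustrative assumptions). Two players run gradient descent simultaneously on opposed losses, and instead of converging to the equilibrium at the origin, the dynamics spiral outward: each player's update is a descent step on its own loss, yet the joint system never descends anything.

```python
import numpy as np

# Illustrative two-player zero-sum game: player 1 controls x and minimizes
# f(x, y) = x * y, while player 2 controls y and minimizes -f(x, y).
# The unique equilibrium is (x, y) = (0, 0).
def simultaneous_gradient_descent(x, y, lr=0.1, steps=100):
    trajectory = [(x, y)]
    for _ in range(steps):
        grad_x = y    # d/dx of x * y
        grad_y = -x   # d/dy of -(x * y)
        # Both players step at once, each descending its own loss.
        x, y = x - lr * grad_x, y - lr * grad_y
        trajectory.append((x, y))
    return trajectory

traj = simultaneous_gradient_descent(1.0, 1.0)

# Each update multiplies the norm of (x, y) by sqrt(1 + lr**2), so the
# distance from the equilibrium grows at every step.
for t in (0, 25, 50, 100):
    x, y = traj[t]
    print(f"step {t:3d}: |(x, y)| = {np.hypot(x, y):.3f}")
```

The joint vector field here is a pure rotation around the origin, so there is no function on (x, y) that both updates jointly decrease; this is the simplest instance of "no loss landscape" in the abstract above.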
