Differentiable Programming in High-Energy Physics

Atılım Güneş Baydin, Kyle Cranmer, Matthew Feickert, Lindsey Gray, Lukas Heinrich, Alexander Held, Andrew Melo, Mark Neubauer, Jannicke Pearkes, Nathan Simpson, Nick Smith, Giordon Stark, Savannah Thais, Vassil Vassilev and Gordon Watts

Abstract

A key component of the success of deep learning is the use of gradient-based optimization. Deep learning practitioners compose a variety of modules together to build a complex computational pipeline that may depend on millions or billions of parameters. Differentiating such functions is made possible by a computational technique known as automatic differentiation. The success of deep learning has led to an abstraction known as differentiable programming, which is being promoted to a first-class citizen in many programming languages and data analysis frameworks. This often involves replacing common non-differentiable operations (e.g., binning, sorting) with relaxed, differentiable analogues. The result is a system that can be optimized end to end using efficient gradient-based optimization algorithms. A differentiable analysis could be optimized in this way: everything from basic cuts to final fits could be tuned automatically while taking full systematic uncertainties into account. This Snowmass LOI outlines the potential advantages and challenges of adopting a differentiable programming paradigm in high-energy physics.
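
To make the "relaxed, differentiable analogue" idea concrete, here is a minimal sketch (not part of the LOI) of replacing hard histogram binning with a smooth surrogate in JAX: each event contributes Gaussian-kernel mass to every bin, so gradients flow from bin yields back to the input data. The function name `soft_histogram`, the bin edges, and the `bandwidth` parameter are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def soft_histogram(x, edges, bandwidth=0.1):
    """Differentiable histogram: each event deposits into every bin the
    Gaussian CDF mass of a kernel (centered on the event) inside that bin."""
    cdf = jax.scipy.stats.norm.cdf  # smooth step standing in for a hard cut
    lo = cdf((edges[:-1][None, :] - x[:, None]) / bandwidth)
    hi = cdf((edges[1:][None, :] - x[:, None]) / bandwidth)
    return jnp.sum(hi - lo, axis=0)  # sum kernel masses over events -> bin yields

edges = jnp.linspace(0.0, 1.0, 11)  # 10 bins on [0, 1]
x = jax.random.uniform(jax.random.PRNGKey(0), (1000,))
counts = soft_histogram(x, edges)

# Because the histogram is smooth, a downstream scalar built from the bin
# yields (a stand-in for an analysis figure of merit) is differentiable
# with respect to the inputs:
grad = jax.grad(lambda data: soft_histogram(data, edges).var())(x)
```

As `bandwidth` shrinks, the soft histogram approaches ordinary hard binning but its gradients become less informative; choosing this trade-off is one of the practical challenges such relaxations introduce.
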

Book Title
Snowmass 2021 Letters of Interest (LOI), Division of Particles and Fields (DPF), American Physical Society
Year
2020