
Backprop as Functor: a compositional perspective on supervised learning

David Spivak (MIT)

Neural networks can be trained to compute functions, such as classifying images. The usual description of this process involves keywords like neural architecture, activation function, cost function, backpropagation, training data, weights and biases, and weight-tying.
In this talk we will describe a symmetric monoidal category Learn, in which objects are sets and morphisms are roughly "functions that adapt to training data". The backpropagation algorithm can then be viewed as a strong monoidal functor from a category of parameterized functions between Euclidean spaces to our category Learn.
This presentation is algebraic, not algorithmic; in particular it does not give immediate insight into improving the speed or accuracy of neural networks. The point of the talk is simply to articulate the various structures that one observes in this subject—including all the keywords mentioned above—and thereby obtain a categorical foothold for further study.
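
For concreteness, here is a minimal sketch of the data carried by a morphism A → B in Learn, following the associated paper by Fong, Spivak, and Tuyéras (a summary, not material quoted from the talk): a parameter set P together with implement, update, and request functions,

\[
  I \colon P \times A \to B \quad (\text{implement}), \qquad
  U \colon P \times A \times B \to P \quad (\text{update}), \qquad
  r \colon P \times A \times B \to A \quad (\text{request}).
\]

Composition with a second learner $(Q, J, V, s) \colon B \to C$ uses parameter set $P \times Q$ and, writing $b = I(p, a)$, is given by

\[
  (p, q, a) \mapsto J(q, b), \qquad
  \big((p,q), a, c\big) \mapsto \big(U(p, a, s(q, b, c)),\ V(q, b, c)\big), \qquad
  \big((p,q), a, c\big) \mapsto r\big(p, a, s(q, b, c)\big).
\]

Roughly speaking, backpropagation supplies U and r for a parameterized differentiable function via gradient descent, relative to a chosen step size and error function, and this assignment is what the talk packages as a strong monoidal functor into Learn.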