The Complexity of TensorFlow
Supervisors
Suitable for
Abstract
TensorFlow is an open-source library for machine learning released by Google in 2015. Though widely used for modelling
neural networks, TensorFlow can in fact express any computation that can be represented as a data flow graph. In
computational complexity, such graphs have long been studied in the context of arithmetic circuits. In recent years,
combining techniques from circuit complexity theory with techniques from number theory has led to algorithms that
evaluate arithmetic circuits more efficiently than standard methods. The goal of this project is to investigate whether
those techniques can be applied to efficiently evaluate (subclasses of) the data flow graphs arising in TensorFlow. To this
end, we plan to develop a formal model of TensorFlow data flow graphs, and to analyse the computational complexity of
evaluating such graphs and of computing their gradients.
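To make the objects of study concrete, the following is a minimal sketch of an arithmetic circuit viewed as a data flow graph: nodes are inputs, additions, and multiplications, the circuit is evaluated in topological order, and gradients are obtained by a reverse sweep (reverse-mode automatic differentiation, the scheme underlying TensorFlow's gradient computation). The `Node` representation and function names here are illustrative, not part of any library.

```python
# Sketch of an arithmetic circuit as a data flow graph, with evaluation
# and reverse-mode gradient computation. All names are hypothetical.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op            # 'input', 'add', or 'mul'
        self.inputs = list(inputs)
        self.value = value      # set up front for 'input' nodes
        self.grad = 0.0

def topo_order(output):
    """Return the nodes reachable from `output` in topological order."""
    order, seen = [], set()
    def visit(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for m in n.inputs:
            visit(m)
        order.append(n)
    visit(output)
    return order

def evaluate(output):
    """Forward sweep: compute each gate's value from its inputs."""
    for n in topo_order(output):
        if n.op == 'add':
            n.value = n.inputs[0].value + n.inputs[1].value
        elif n.op == 'mul':
            n.value = n.inputs[0].value * n.inputs[1].value
    return output.value

def gradients(output):
    """Reverse sweep: accumulate d(output)/d(node) into each node's grad."""
    order = topo_order(output)
    for n in order:
        n.grad = 0.0
    output.grad = 1.0
    for n in reversed(order):
        if n.op == 'add':
            n.inputs[0].grad += n.grad
            n.inputs[1].grad += n.grad
        elif n.op == 'mul':
            n.inputs[0].grad += n.grad * n.inputs[1].value
            n.inputs[1].grad += n.grad * n.inputs[0].value

# Example circuit: f(x, y) = (x + y) * x, at x = 3, y = 2
x = Node('input', value=3.0)
y = Node('input', value=2.0)
f = Node('mul', [Node('add', [x, y]), x])
evaluate(f)    # f = 15.0
gradients(f)   # df/dx = 2x + y = 8.0, df/dy = x = 3.0
```

Both sweeps take time linear in the size of the graph; the algorithms the project alludes to aim to beat this baseline on restricted circuit classes.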
Prerequisites: Computational Complexity and a robust background in mathematics