End-to-end Training of Differentiable Pipelines Across Machine Learning Frameworks

Mitar Milutinovic, Atılım Güneş Baydin, Robert Zinkov, William Harvey, Dawn Song, Frank Wood and Wade Shen

Abstract

In this work we present a unified interface and methodology for performing end-to-end gradient-based refinement of pipelines of differentiable machine-learning primitives. This is distinguished from recent interoperability efforts such as the Open Neural Network Exchange (ONNX) format and other language-centric cross-compilation approaches in that the final pipeline does not need to be implemented or trained in a single language, nor cross-compiled into one; in other words, primitives may be written and pre-trained in PyTorch, TensorFlow, Caffe, scikit-learn, or any other popular machine learning framework, and then fine-tuned end-to-end while being executed directly in their host frameworks. Provided that primitives expose our proposed interface, it is possible to automatically compose such primitives and refine them based on an end-to-end loss.
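As a rough illustration of the idea, the sketch below shows one possible shape for such an interface. All names here (DifferentiablePrimitive, forward, backward, step, LinearPrimitive, Pipeline) are hypothetical and are not taken from the paper; the essential point being illustrated is that each primitive exchanges values and gradients as plain NumPy arrays, so the backward pass can be chained across framework boundaries while each primitive executes inside its own host framework. The linear primitive is implemented directly in NumPy only so the sketch is self-contained; in the setting the paper describes it would instead wrap, say, a pre-trained PyTorch or TensorFlow module.

```python
# Hypothetical sketch of a framework-agnostic primitive interface;
# not the paper's actual API.
import numpy as np


class DifferentiablePrimitive:
    """Contract: forward() maps inputs to outputs; backward() consumes the
    loss gradient w.r.t. the outputs, accumulates parameter gradients, and
    returns the gradient w.r.t. the inputs; step() applies an update."""

    def forward(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def step(self, lr: float) -> None:
        raise NotImplementedError


class LinearPrimitive(DifferentiablePrimitive):
    """Stand-in for a primitive hosted in some framework; pure NumPy here."""

    def __init__(self, d_in, d_out, rng):
        self.W = rng.standard_normal((d_in, d_out)) * 0.1
        self.x = None   # cached input for the backward pass
        self.gW = None  # gradient of the loss w.r.t. W

    def forward(self, x):
        self.x = x
        return x @ self.W

    def backward(self, grad_out):
        self.gW = self.x.T @ grad_out  # parameter gradient
        return grad_out @ self.W.T     # input gradient, passed upstream

    def step(self, lr):
        self.W -= lr * self.gW


class Pipeline:
    """Composes primitives; end-to-end refinement chains their backward()
    calls in reverse order, regardless of each primitive's host framework."""

    def __init__(self, primitives):
        self.primitives = primitives

    def forward(self, x):
        for p in self.primitives:
            x = p.forward(x)
        return x

    def backward(self, grad_out):
        for p in reversed(self.primitives):
            grad_out = p.backward(grad_out)

    def step(self, lr):
        for p in self.primitives:
            p.step(lr)


# Toy usage: fine-tune a two-primitive pipeline against an end-to-end MSE loss.
rng = np.random.default_rng(0)
pipe = Pipeline([LinearPrimitive(4, 8, rng), LinearPrimitive(8, 1, rng)])
x, y = rng.standard_normal((32, 4)), rng.standard_normal((32, 1))
for _ in range(100):
    pred = pipe.forward(x)
    pipe.backward(2.0 * (pred - y) / len(x))  # d(MSE)/d(pred)
    pipe.step(lr=0.1)
print("final MSE:", float(np.mean((pipe.forward(x) - y) ** 2)))
```

Under this assumed design, the only coupling between primitives is the array format of values and gradients, which is what lets a pipeline mix primitives from different frameworks without cross-compiling anything into a single language.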

Book Title
Neural Information Processing Systems (NIPS) 2017 Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, Long Beach, CA, US, December 9, 2017
Year
2017