
Algorithm-Hardware Co-Design for Energy-Efficient Neural Network Inference

Paul Whatmough (Arm ML Research Lab Boston / Harvard University)

Deep neural networks (DNNs) have quickly become an essential workload across computing form factors, including IoT, mobile, automotive, and datacenter. However, DNN inference demands an enormous number of arithmetic operations and has a large memory footprint. In this talk, we will explore the co-design of DNN models and hardware to achieve state-of-the-art performance for real-time, energy-constrained inference applications.

Speaker bio

Paul Whatmough received his doctorate from University College London, UK. He was previously with Philips, NXP, Arm, and Harvard, working in the areas of ML, DSP, wireless, accelerators, and circuits. He currently leads research at the Arm ML Research Lab Boston and is an Associate at Harvard.

