A Survey on Neuro-mimetic Deep Learning via Predictive Coding

Tommaso Salvatori, Ankur Mali, Christopher L. Buckley, Thomas Lukasiewicz, Rajesh P. N. Rao, Karl Friston and Alexander Ororbia

Abstract

Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century. Most results in AI to date have been achieved using deep neural networks trained with the error backpropagation learning algorithm. However, this algorithm has long been considered biologically implausible, and recent work has therefore studied learning algorithms for deep neural networks that are inspired by the neurosciences. One such theory, predictive coding (PC), exhibits promising properties that make it potentially valuable for the machine learning community: it can model information processing in different areas of the brain, can be applied to control and robotics, has a solid mathematical foundation in variational inference, and performs its computations asynchronously. Motivated by these properties, novel PC-inspired algorithms are beginning to appear across multiple sub-fields of machine learning and artificial intelligence at large. Here, we survey these efforts: we first give a broad overview of the history of PC to establish common ground for understanding the recent developments, then review current efforts and results, and conclude with an extensive discussion of possible implications and ways forward.
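To make the properties mentioned in the abstract concrete, the following is a minimal, illustrative sketch of hierarchical predictive coding in the spirit of Rao and Ballard (1999): latent states are relaxed to minimize layer-wise prediction errors, and weights are then adjusted with local, Hebbian-like updates. Variable names, layer sizes, and learning rates are illustrative assumptions, not the survey's notation or any specific model from the paper.

```python
# Minimal, illustrative sketch of hierarchical predictive coding.
# All names and dimensions are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def f(x):          # activation function
    return np.tanh(x)

def df(x):         # its derivative, used in the state update
    return 1.0 - np.tanh(x) ** 2

# A 3-layer generative model: layer l predicts layer l-1 via W[l].
dims = [10, 8, 6]                      # sensory, hidden, top
W = [None] + [0.1 * rng.standard_normal((dims[l - 1], dims[l]))
              for l in range(1, len(dims))]

def infer_and_learn(y, T=50, lr_x=0.1, lr_w=0.01):
    """Clamp the sensory layer to y, relax latent states by descending the
    prediction-error energy, then apply local weight updates."""
    x = [y] + [np.zeros(d) for d in dims[1:]]        # x[0] is clamped to the data
    for _ in range(T):
        # prediction errors: eps[l] = x[l] - W[l+1] f(x[l+1])
        eps = [x[l] - W[l + 1] @ f(x[l + 1]) for l in range(len(dims) - 1)]
        eps.append(x[-1])                            # prior error at the top (zero-mean prior)
        # state updates use only errors from adjacent layers (local computation)
        for l in range(1, len(dims)):
            x[l] += lr_x * (-eps[l] + df(x[l]) * (W[l].T @ eps[l - 1]))
    # weight updates are local: presynaptic activity times postsynaptic error
    for l in range(1, len(dims)):
        W[l] += lr_w * np.outer(eps[l - 1], f(x[l]))
    return float(sum(np.sum(e ** 2) for e in eps))

for step in range(5):
    energy = infer_and_learn(rng.standard_normal(dims[0]))
    print(f"step {step}: prediction-error energy = {energy:.3f}")
```

Because each state and weight update depends only on quantities available in adjacent layers, the updates could in principle be carried out asynchronously across layers, which is one of the properties the abstract highlights.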

Journal
Neural Networks
Note
In press.
Year
2025