Associative Memories via Predictive Coding

Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha, Zhenghua Xu, Rafal Bogacz and Thomas Lukasiewicz


Associative memories in the brain receive and store patterns of activity registered by the sensory neurons, and are able to retrieve them when necessary. Due to their importance in human intelligence, computational models of associative memories have been developed for several decades. They include autoassociative memories, which allow for storing data points and retrieving a stored data point s when provided with a noisy or partial variant of s. Such associative memories also play an important role in deep learning, as it has been shown that overparametrized neural networks trained with backpropagation are able to memorize data points. In this paper, we present a novel neural model for realizing (auto)associative memories, based on a hierarchical generative model that receives external stimuli via sensory neurons. This model is trained using predictive coding, an error-based learning algorithm inspired by learning in the cortex. To test the capabilities of this model, we perform multiple retrieval experiments from both corrupted and partial data points. In an extensive comparison, we show that this new model outperforms state-of-the-art associative memories (most notably autoencoders trained via backpropagation) by a large margin in storage capacity, retrieval accuracy, and robustness. In particular, when completing partial data points, our model achieves remarkable results on natural image datasets such as ImageNet, retrieving with surprisingly high accuracy even when presented with only a tiny fraction of the pixels of the original images.
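To make the mechanism described above concrete, here is a minimal sketch of a hierarchical predictive coding network used as an associative memory. All layer sizes, learning rates, and function names below are illustrative assumptions, not the paper's actual implementation: value nodes are relaxed by gradient descent on the summed squared prediction errors, weights are updated from the residual errors, and retrieval from a partial cue is done by clamping the observed sensory units and letting the unobserved ones relax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-layer hierarchy (sizes are assumptions):
# latent (top) -> hidden -> sensory (bottom, receives the stimulus).
sizes = [16, 32, 64]
W = [rng.normal(0.0, 0.1, (sizes[i + 1], sizes[i])) for i in range(2)]

def f(a):                     # activation used for top-down predictions
    return np.tanh(a)

def df(a):                    # its derivative
    return 1.0 - np.tanh(a) ** 2

def energy(x):
    """Sum of squared prediction errors across the hierarchy."""
    return sum(0.5 * np.sum((x[i + 1] - W[i] @ f(x[i])) ** 2)
               for i in range(2))

def relax(x, mask=None, obs=None, T=50, lr=0.1):
    """Gradient descent on value nodes; clamped sensory units stay fixed."""
    for _ in range(T):
        e = [x[i + 1] - W[i] @ f(x[i]) for i in range(2)]
        x[0] = x[0] + lr * df(x[0]) * (W[0].T @ e[0])
        x[1] = x[1] - lr * (e[0] - df(x[1]) * (W[1].T @ e[1]))
        x[2] = x[2] - lr * e[1]
        if mask is not None:
            x[2] = np.where(mask, obs, x[2])   # re-clamp observed pixels
    return x

# Store one pattern: clamp it at the sensory layer, infer the latent
# activities, then take error-driven weight steps (values are assumptions).
pattern = rng.normal(size=sizes[2])
x = [rng.normal(size=s) * 0.1 for s in sizes]
mask_all = np.ones(sizes[2], dtype=bool)
for _ in range(50):
    x = relax(x, mask=mask_all, obs=pattern, T=20)
    e = [x[i + 1] - W[i] @ f(x[i]) for i in range(2)]
    for i in range(2):
        W[i] += 0.05 * np.outer(e[i], f(x[i]))

# Retrieve from a partial cue: clamp half the pixels, relax the rest.
mask = np.arange(sizes[2]) < sizes[2] // 2
y = [rng.normal(size=s) * 0.1 for s in sizes]
y[2] = np.where(mask, pattern, 0.0)
y = relax(y, mask=mask, obs=pattern, T=500)
```

The retrieval step mirrors the completion experiments in the abstract: the observed fraction of the image is held fixed while inference fills in the missing pixels with the network's best prediction.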

Book Title
Proceedings of the 35th Annual Conference on Neural Information Processing Systems, NeurIPS 2021