Memory in AI Agents: Internal and External Memory Mechanisms in LLM-Based Systems
Supervisors
Suitable for
Abstract
Prerequisites: Machine learning fundamentals; familiarity with Python. Interest in LLMs, agents, or reinforcement learning is helpful but not required.
Background
● AI agents based on Large Language Models (LLMs) are increasingly deployed in settings that require remembering past interactions, long-term goals, and external information. Unlike traditional models, these agents often combine internal memory (implicit storage within model parameters and activations) with external memory (such as retrieval systems, tool-based memory, or persistent state). Understanding how these different forms of memory interact is critical for building reliable, interpretable, and scalable AI agents. This project explores memory mechanisms from a systems and behavioral perspective, focusing on how LLM-based agents store, retrieve, and use information over time; a minimal sketch of how the two kinds of memory can be combined in an agent loop is given below.
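
To make the distinction concrete, the sketch below shows one way (of many) that an agent loop can combine the two: the running conversation history acts as internal, contextual memory passed back to the model each turn, while a persistent note store acts as external memory that is queried before each step and written to afterwards. The call_llm stub, the ExternalMemory class, and the keyword-overlap retrieval are illustrative placeholders rather than components of any particular framework.

from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for whatever model API the project ends up using.
    return f"(model reply to a prompt of {len(prompt)} characters)"


@dataclass
class ExternalMemory:
    # Toy persistent store: keyword-overlap lookup over previously saved notes.
    notes: list[str] = field(default_factory=list)

    def write(self, note: str) -> None:
        self.notes.append(note)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive relevance score: number of words shared between query and note.
        query_words = set(query.lower().split())
        ranked = sorted(
            self.notes,
            key=lambda note: len(query_words & set(note.lower().split())),
            reverse=True,
        )
        return ranked[:k]


@dataclass
class Agent:
    memory: ExternalMemory                              # external, persistent memory
    history: list[str] = field(default_factory=list)    # internal, contextual memory

    def step(self, user_input: str) -> str:
        retrieved = self.memory.retrieve(user_input)
        prompt = (
            "Relevant notes:\n" + "\n".join(retrieved) + "\n\n"
            "Conversation so far:\n" + "\n".join(self.history) + "\n\n"
            f"User: {user_input}\nAssistant:"
        )
        reply = call_llm(prompt)
        self.history.append(f"User: {user_input}")
        self.history.append(f"Assistant: {reply}")
        self.memory.write(f"{user_input} -> {reply}")   # persists across episodes
        return reply

Keeping the two stores separate in this way makes it straightforward to ablate one while holding the other fixed, which is the kind of comparison the project is interested in.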
Focus
● The project investigates memory in LLM-based AI agents, with emphasis on the interaction between internal and external memory. The central research questions are: What roles do different memory mechanisms play in agent behavior? How do internal representations and external memory systems jointly support long-horizon reasoning and decision-making?
● The expected contribution is a clearer conceptual and empirical understanding of memory usage in modern AI agents.
Method
The project will build on existing literature on LLM-based agents, memory-augmented neural networks, and retrieval-augmented generation (RAG) [1]. Students will analyze agent behavior in controlled environments, comparing performance across tasks that stress different memory demands. Potential starting points include agent frameworks that integrate LLMs with tools or retrieval modules, and research on long-term memory in sequential decision-making; a minimal retrieval sketch follows the reference below.
[1] Lewis, Patrick, et al. "Retrieval-augmented generation for knowledge-intensive NLP tasks." Advances in Neural Information Processing Systems 33 (2020): 9459-9474.
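
As one possible starting point for the retrieval side, the sketch below builds a minimal retriever in the spirit of RAG [1]: documents are scored against a query with TF-IDF cosine similarity and the top matches are prepended to the prompt. This is a simplification rather than the pipeline of [1]; the TfidfRetriever class and build_prompt helper are illustrative names, and a dense-embedding index or an off-the-shelf vector store could be substituted.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


class TfidfRetriever:
    def __init__(self, documents: list[str]):
        self.documents = documents
        self.vectorizer = TfidfVectorizer()
        self.doc_matrix = self.vectorizer.fit_transform(documents)

    def top_k(self, query: str, k: int = 3) -> list[str]:
        # Score every document against the query and return the k best matches.
        query_vec = self.vectorizer.transform([query])
        scores = cosine_similarity(query_vec, self.doc_matrix)[0]
        best = scores.argsort()[::-1][:k]
        return [self.documents[i] for i in best]


def build_prompt(question: str, retriever: TfidfRetriever) -> str:
    # Prepend the retrieved passages so the model can condition on them.
    context = "\n".join(retriever.top_k(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"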
Goals:
● Essential: Survey literature on memory in AI agents, including internal and external memory mechanisms.
● Essential: Empirically study how LLM-based agents use memory across multi-step or long-horizon tasks (see the probe sketch after this list).
● Stretch: Analyze how variations in memory access or structure affect agent reliability, reasoning, or adaptation.
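
As one concrete illustration of the second essential goal, the probe below plants a fact early in an episode, pads the episode with a variable number of distractor turns, and then checks whether the agent can still recover the fact. Running it with and without an external memory store (for instance, with the Agent sketch above) isolates how much recall each mechanism carries. The agent_factory argument and the step(text) -> str interface are assumptions carried over from the earlier sketch, not requirements of any framework.

import random


def make_episode(gap: int) -> tuple[list[str], str, str]:
    # Returns (turns, final_question, expected_answer) with `gap` distractor turns
    # separating the planted fact from the final query.
    code = str(random.randint(1000, 9999))
    turns = [f"Please remember that the locker code is {code}."]
    turns += [f"Distractor turn {i}: summarise today's weather report." for i in range(gap)]
    question = "What is the locker code I told you earlier?"
    return turns, question, code


def recall_rate(agent_factory, gaps=(2, 8, 32, 128), episodes: int = 20) -> dict[int, float]:
    # Fraction of episodes in which the agent's final answer contains the planted code,
    # reported separately for each distractor gap.
    results = {}
    for gap in gaps:
        correct = 0
        for _ in range(episodes):
            agent = agent_factory()            # fresh agent per episode
            turns, question, answer = make_episode(gap)
            for turn in turns:
                agent.step(turn)
            correct += answer in agent.step(question)
        results[gap] = correct / episodes
    return results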