Continual Knowledge Updating in LLM Systems: Learning Through Multi-Timescale Memory Dynamics
Andreas Pattichis, Constantine Dovrolis
TLDR
This paper introduces Memini, an external associative memory for LLMs that uses bio-inspired multi-timescale dynamics for continual knowledge updating.
Key contributions
- Proposes bio-inspired multi-timescale memory for continual LLM knowledge updating.
- Introduces Memini, an associative memory organized as a directed graph.
- Uses coupled fast and slow variables per edge (the Benna-Fusi model of synaptic consolidation) for dynamic knowledge management; a minimal sketch follows this list.
- Enables episodic sensitivity, gradual consolidation, and selective forgetting.
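To make the mechanism concrete, here is a minimal Python sketch of a two-variable Benna-Fusi edge using a simple Euler discretization. The class name BennaFusiEdge, the parameters g, c_fast, and c_slow, and all method names are illustrative assumptions, not the paper's implementation.

```python
class BennaFusiEdge:
    """Hypothetical sketch of one graph edge with two coupled Benna-Fusi
    variables (not the paper's code). u_fast receives write events directly;
    the two variables exchange a diffusive flow, so u_slow slowly absorbs
    what u_fast keeps seeing while one-off writes fade.
    """

    def __init__(self, g=0.1, c_fast=1.0, c_slow=10.0):
        self.u_fast = 0.0     # fast variable: immediate, episodic strength
        self.u_slow = 0.0     # slow variable: consolidated strength
        self.g = g            # coupling conductance between the variables
        self.c_fast = c_fast  # small "capacitance" -> fast dynamics
        self.c_slow = c_slow  # large "capacitance" -> slow dynamics

    def write(self, delta=1.0):
        """A new or repeated association bumps the fast variable."""
        self.u_fast += delta

    def step(self, dt=1.0):
        """Advance one time unit: bidirectional coupling (Euler step)."""
        flow = self.g * (self.u_fast - self.u_slow)
        self.u_fast -= (flow / self.c_fast) * dt
        self.u_slow += (flow / self.c_slow) * dt

    def strength(self):
        """Effective edge weight read out at retrieval time."""
        return self.u_fast
```

Because c_slow is much larger than c_fast, a single write spikes u_fast and then bleeds away, while repeated writes give u_slow time to absorb the signal; in this toy form, episodic sensitivity, gradual consolidation, and selective forgetting all come from the same coupling.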
Why it matters
This work addresses a critical limitation of LLMs: they are trained once but deployed into a world that keeps changing. By reframing external memory as a self-organizing learning substrate, it offers a path for LLMs to stay current without costly retraining, mirroring how biological memory consolidates repeated experience and lets the rest fade.
Original Abstract
LLMs are trained once, then deployed into a world that never stops changing. External memory compensates for this, but most systems manage it explicitly rather than letting it adapt on its own. Biological memory works differently: coupled multi-timescale dynamics make new associations immediately usable, strengthen what repetition confirms, and let the rest fade. We argue that external memory should follow a similar principle. In Memini, this view takes the form of an associative memory that organizes knowledge as a directed graph. Each edge carries two coupled internal variables, one fast and one slow, following the Benna-Fusi model of synaptic consolidation. From this coupling, episodic sensitivity, gradual consolidation, and selective forgetting emerge as facets of a single mechanism, reframing external memory as a learning substrate that reorganizes through its own dynamics.
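As a rough illustration of how such edges might sit inside the directed graph the abstract describes, the following sketch builds on the BennaFusiEdge class above. MeminiGraph, its methods (observe, tick, neighbors), and the pruning threshold are assumptions made for illustration, not the paper's API.

```python
class MeminiGraph:
    """Hypothetical associative memory: a directed graph whose edges are
    BennaFusiEdge instances (see the earlier sketch)."""

    def __init__(self):
        self.edges = {}  # (src, dst) -> BennaFusiEdge

    def observe(self, src, dst, delta=1.0):
        """Writing an association makes it immediately usable via u_fast."""
        edge = self.edges.setdefault((src, dst), BennaFusiEdge())
        edge.write(delta)

    def tick(self, dt=1.0, prune_below=1e-3):
        """Advance every edge's dynamics; drop edges that have faded."""
        for key in list(self.edges):
            edge = self.edges[key]
            edge.step(dt)
            if abs(edge.strength()) < prune_below and abs(edge.u_slow) < prune_below:
                del self.edges[key]  # selective forgetting

    def neighbors(self, src):
        """Retrieve outgoing associations ranked by current strength."""
        outs = [(d, e.strength()) for (s, d), e in self.edges.items() if s == src]
        return sorted(outs, key=lambda x: -x[1])
```

In this toy version, an association written once is retrievable right away, repetition lets the slow variable consolidate it, and unreinforced edges eventually fall below the pruning threshold, so the graph reorganizes through its own dynamics rather than through explicit management.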