ArXiv TLDR

StructMem: Structured Memory for Long-Horizon Behavior in LLMs

arXiv: 2604.21748

Buqiang Xu, Yijun Chen, Jizhan Fang, Ruobin Zhong, Yunzhi Yao + 3 more

cs.CL, cs.AI, cs.IR, cs.LG, cs.MA

TLDR

StructMem offers a structured, hierarchical memory for LLMs, enhancing long-horizon behavior and reasoning while reducing token usage, API calls, and runtime.

Key contributions

  • Introduces StructMem, a hierarchical memory framework that models relationships between events for LLM agents.
  • Significantly improves temporal reasoning and multi-hop QA performance on the LoCoMo benchmark.
  • Substantially reduces token usage, API calls, and runtime compared to prior memory systems.
  • Achieves efficiency through temporal anchoring and periodic semantic consolidation (see the sketch after this list).
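
The digest doesn't include implementation details, but the mechanism is easy to picture. Below is a minimal Python sketch of a hierarchical memory with timestamped event entries, induced cross-event links, and a periodic consolidation pass. All names here (HierarchicalMemory, add, query_range, the word-overlap linking rule) are illustrative assumptions, not StructMem's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One memory entry, temporally anchored to its timestamp."""
    text: str
    timestamp: datetime
    links: set = field(default_factory=set)  # indices of related events

class HierarchicalMemory:
    """Illustrative sketch (not the paper's API): a fine-grained event
    layer plus a consolidated summary layer built every `period` inserts."""

    def __init__(self, period: int = 4):
        self.events = []     # fine-grained, timestamped layer
        self.summaries = []  # higher-level consolidated layer
        self.period = period

    def add(self, text: str, timestamp: datetime) -> int:
        idx = len(self.events)
        event = Event(text, timestamp)
        # Induce cross-event connections; word overlap stands in for the
        # semantic linking an LLM would perform in the real system.
        words = set(text.lower().split())
        for j, prior in enumerate(self.events):
            if words & set(prior.text.lower().split()):
                event.links.add(j)
                prior.links.add(idx)
        self.events.append(event)
        # Periodic semantic consolidation of the most recent window.
        if len(self.events) % self.period == 0:
            window = self.events[-self.period:]
            self.summaries.append(" | ".join(e.text for e in window))
        return idx

    def query_range(self, start: datetime, end: datetime) -> list:
        # Temporal anchoring makes time-scoped retrieval a simple filter.
        return [e for e in self.events if start <= e.timestamp <= end]
```

Keeping a cheap consolidated layer alongside raw events would let a system answer coarse questions from summaries instead of re-reading every turn, which is one plausible source of the token and API savings reported above.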

Why it matters

Current LLM memory systems face a fundamental trade-off: flat memories are efficient but cannot model relationships between events, while graph-based memories support structured reasoning at the cost of expensive, fragile construction. StructMem's structure-enriched approach improves reasoning on long-horizon tasks while drastically cutting token, API, and runtime costs, advancing more capable and sustainable conversational AI.

Original Abstract

Long-term conversational agents need memory systems that capture relationships between events, not merely isolated facts, to support temporal reasoning and multi-hop question answering. Current approaches face a fundamental trade-off: flat memory is efficient but fails to model relational structure, while graph-based memory enables structured reasoning at the cost of expensive and fragile construction. To address these issues, we propose StructMem, a structure-enriched hierarchical memory framework that preserves event-level bindings and induces cross-event connections. By temporally anchoring dual perspectives and performing periodic semantic consolidation, StructMem improves temporal reasoning and multi-hop performance on LoCoMo, while substantially reducing token usage, API calls, and runtime compared to prior memory systems; see https://github.com/zjunlp/LightMem.
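
To make the multi-hop claim concrete, here is a tiny self-contained sketch of how explicit cross-event links can answer a question that flat retrieval would have to piece together. The toy events, their link structure, and the multi_hop helper are all hypothetical, not taken from the paper.

```python
from collections import deque

# Toy event graph: each entry is (fact text, indices of linked events).
events = {
    0: ("Alice adopted a dog in May.", [1]),
    1: ("The dog was named Biscuit.", [0, 2]),
    2: ("Biscuit won a local agility contest.", [1]),
}

def multi_hop(start: int, hops: int) -> list:
    """Breadth-first walk over cross-event links up to `hops` steps,
    gathering the chain of facts a flat memory would store in isolation."""
    seen, frontier, facts = {start}, deque([(start, 0)]), []
    while frontier:
        idx, depth = frontier.popleft()
        facts.append(events[idx][0])
        if depth < hops:
            for nxt in events[idx][1]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return facts

# "What contest did the pet Alice adopted win?" needs two hops: 0 -> 1 -> 2.
print(multi_hop(0, hops=2))
```

A flat memory would need the question itself to mention "Biscuit" to retrieve the final fact; following the link chain recovers it from "the pet Alice adopted."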
