ArXiv TLDR

OCR-Memory: Optical Context Retrieval for Long-Horizon Agent Memory

arXiv:2604.26622

Jinze Li, Yang Zhang, Xin Yang, Jiayi Qu, Jinfeng Xu + 3 more

cs.CL

TLDR

OCR-Memory enables LLM agents to retain long-term experience by encoding historical trajectories visually, overcoming text-context limits and reducing hallucination.

Key contributions

  • Uses visual modality to represent agent experience, overcoming text-context budget limits.
  • Renders historical trajectories into images annotated with unique visual identifiers.
  • Retrieves experience via a "locate-and-transcribe" paradigm: visual anchors select the relevant regions, whose text is then read back verbatim.
  • Avoids free-form generation, significantly reducing hallucination in LLM agents.
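The retrieval flow described above can be sketched in miniature. This is an illustrative mock, not the paper's implementation: the class name `OpticalMemory`, the anchor format `A{n}`, and the keyword-based `locate` step are all assumptions standing in for the real rendering and vision-language components.

```python
class OpticalMemory:
    """Toy sketch of the locate-and-transcribe idea (illustrative only).

    In OCR-Memory, trajectories are rendered into images tagged with unique
    visual identifiers; here we keep only the anchor -> verbatim-text mapping
    that such rendering would create.
    """

    def __init__(self):
        self._pages = []  # each page: {anchor_id: verbatim step text}

    def archive(self, steps):
        # Assumed anchor scheme "A0", "A1", ...; the real system uses
        # visual identifiers drawn onto the rendered image.
        page = {f"A{i}": s for i, s in enumerate(steps)}
        self._pages.append(page)
        return list(page)  # anchor ids for this page

    def locate(self, query):
        # Stand-in for the model locating relevant image regions;
        # a simple keyword match is used here for illustration.
        hits = []
        for p, page in enumerate(self._pages):
            for anchor, text in page.items():
                if query.lower() in text.lower():
                    hits.append((p, anchor))
        return hits

    def transcribe(self, page_idx, anchor):
        # Verbatim lookup: the answer is copied, not generated,
        # which is what suppresses hallucination.
        return self._pages[page_idx][anchor]


mem = OpticalMemory()
mem.archive(["open browser", "search for flights to Tokyo", "book ticket"])
hits = mem.locate("flights")
print([mem.transcribe(p, a) for p, a in hits])  # -> ['search for flights to Tokyo']
```

The key property the sketch preserves is that `transcribe` only ever returns stored text for a located anchor, mirroring how the paper avoids free-form generation at retrieval time.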

Why it matters

LLM agents often struggle with long-term memory because text-context budgets force lossy summarization and fragmented evidence. OCR-Memory instead encodes agent histories visually, which increases effective memory capacity and allows evidence to be recovered verbatim, both of which matter for complex, long-horizon autonomous tasks.

Original Abstract

Autonomous LLM agents increasingly operate in long-horizon, interactive settings where success depends on reusing experience accumulated over extended histories. However, existing agent memory systems are fundamentally constrained by text-context budgets: storing or revisiting raw trajectories is prohibitively token-expensive, while summarization and text-only retrieval trade token savings for information loss and fragmented evidence. To address this limitation, we propose Optical Context Retrieval Memory (OCR-Memory), a memory framework that leverages the visual modality as a high-density representation of agent experience, enabling retention of arbitrarily long histories with minimal prompt overhead at retrieval time. Specifically, OCR-Memory renders historical trajectories into images annotated with unique visual identifiers. OCR-Memory retrieves stored experience via a \emph{locate-and-transcribe} paradigm that selects relevant regions through visual anchors and retrieves the corresponding verbatim text, avoiding free-form generation and reducing hallucination. Experiments on long-horizon agent benchmarks show consistent gains under strict context limits, demonstrating that optical encoding increases effective memory capacity while preserving faithful evidence recovery.
