ArXiv TLDR

ADEMA: A Knowledge-State Orchestration Architecture for Long-Horizon Knowledge Synthesis with LLM Agents

arXiv: 2604.25849

Zhou Hanlin, Chan Huah Yong

cs.AI

TLDR

ADEMA is a knowledge-state orchestration architecture for LLM agents, designed to prevent knowledge drift and ensure continuity in long-horizon synthesis tasks.

Key contributions

  • Prevents knowledge state drift and fractured evidence chains in long-horizon LLM tasks.
  • Employs explicit epistemic bookkeeping and checkpoint-resumable persistence for robustness.
  • Integrates dual-evaluator governance and adaptive task-mode switching for trajectory discipline.
  • Uses segment-level memory condensation and artifact-first assembly for efficient synthesis.
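The paper does not include code, but the interplay of two of the mechanisms above can be illustrated concretely: explicit epistemic bookkeeping (claims committed together with their evidence, rather than left implicit in the dialogue) and checkpoint-resumable persistence (so an interruption does not fracture the evidence chain). The sketch below is purely illustrative; the names `KnowledgeState`, `commit`, `save_checkpoint`, and `load_checkpoint` are assumptions, not ADEMA's actual API.

```python
import json
import tempfile
from dataclasses import dataclass, field, asdict
from pathlib import Path

@dataclass
class KnowledgeState:
    """Explicit epistemic bookkeeping: claims paired with their evidence."""
    round: int = 0
    claims: list = field(default_factory=list)    # committed intermediate claims
    evidence: dict = field(default_factory=dict)  # claim -> supporting sources

    def commit(self, claim: str, sources: list) -> None:
        # Record the claim and its evidence chain explicitly,
        # instead of leaving the commitment implicit across rounds.
        self.claims.append(claim)
        self.evidence[claim] = sources
        self.round += 1

def save_checkpoint(state: KnowledgeState, path: Path) -> None:
    # Persist the full knowledge state so a run can resume after interruption.
    path.write_text(json.dumps(asdict(state)))

def load_checkpoint(path: Path) -> KnowledgeState:
    return KnowledgeState(**json.loads(path.read_text()))

# Usage: commit a claim, checkpoint, then resume as if after an interruption.
ckpt = Path(tempfile.gettempdir()) / "adema_ckpt.json"
state = KnowledgeState()
state.commit("dataset X lacks field Y", ["doc_3", "doc_7"])
save_checkpoint(state, ckpt)

resumed = load_checkpoint(ckpt)  # the evidence chain survives the restart
assert resumed.claims == ["dataset X lacks field Y"]
assert resumed.evidence["dataset X lacks field Y"] == ["doc_3", "doc_7"]
```

This mirrors the paper's observation that removing checkpoint/resume produced the only invalid run in the interruption-sensitive resume condition: without persisted state, the resumed agent has no record of its prior commitments.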

Why it matters

This paper targets two critical failure points in long-horizon LLM tasks: knowledge-state drift and fractured evidence chains. By orchestrating explicit knowledge states and ensuring recoverable continuity, ADEMA improves the reliability and consistency of LLM agents, offering a framework for building more dependable long-horizon AI systems.

Original Abstract

Long-horizon LLM tasks often fail not because a single answer is unattainable, but because knowledge states drift across rounds, intermediate commitments remain implicit, and interruption fractures the evolving evidence chain. This paper presents ADEMA as a knowledge-state orchestration architecture for long-horizon knowledge synthesis rather than as a generic multi-agent runtime. The architecture combines explicit epistemic bookkeeping, heterogeneous dual-evaluator governance, adaptive task-mode switching, reputation-shaped resource allocation, checkpoint-resumable persistence, segment-level memory condensation, artifact-first assembly, and final-validity checking with safe fallback. Evidence is drawn entirely from existing materials: a four-scenario showcase package, a fixed 60-run mechanism matrix, targeted micro-ablation and artifact-chain supplements, and a repaired protocol-level benchmark in which code-oriented evaluation is the clearest quality-sensitive mechanism block. Across the fixed matrix, removing checkpoint/resume produced the only invalid run, and it did so in the interruption-sensitive resume condition. By contrast, dual evaluation, segment synthesis, and dynamic governance are best interpreted as supporting control mechanisms that shape trajectory discipline, explicit artifact progression, and cost-quality behavior rather than as universal binary prerequisites for completion. The contribution is therefore a knowledge-state orchestration architecture in which explicit epistemic state transition, evidence-bearing artifact progression, and recoverable continuity are the primary design commitments.
