ArXiv TLDR

Beyond Single-Model Optimization: Preserving Plasticity in Continual Reinforcement Learning

arXiv: 2604.15414

Lute Lillo, Nick Cheney

cs.LG, cs.AI, cs.NE

TLDR

TeLAPA is a continual RL framework that preserves plasticity by archiving behaviorally diverse policy neighborhoods, enabling faster adaptation and higher performance across task sequences.

Key contributions

  • Introduces TeLAPA (Transfer-Enabled Latent-Aligned Policy Archives), a continual RL framework built on per-task policy archives and a shared latent space.
  • Organizes behaviorally diverse policy neighborhoods instead of preserving a single evolving model (see the sketch after this list).
  • Achieves faster competence recovery and higher performance across task sequences.
  • Demonstrates that effective reuse requires multiple policy alternatives, not just one representative.
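
To make the archive idea concrete, here is a minimal Python sketch (not the authors' implementation) of a per-task archive that indexes policies by a latent behavior embedding, in the spirit of quality-diversity grids. The class name `PolicyArchive`, the `cell_size` niching scheme, and the embedding `z` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a per-task policy archive keyed
# by a latent behavior embedding, in the spirit of quality-diversity grids.
# The class name, cell_size niching scheme, and embedding z are assumptions.
import numpy as np


class PolicyArchive:
    """One archive per task: stores behaviorally diverse policies indexed
    by a latent behavior embedding, so archives from different tasks stay
    comparable (the paper's shared latent space)."""

    def __init__(self, task_id, cell_size=0.25):
        self.task_id = task_id
        self.cell_size = cell_size  # granularity of behavior niches
        self.cells = {}             # niche key -> (policy, fitness, embedding)

    def add(self, policy, fitness, z):
        """Insert a policy with latent embedding z, keeping the fitter of
        two policies that land in the same behavior niche (the standard
        quality-diversity update rule)."""
        z = np.asarray(z, dtype=float)
        key = tuple(np.floor(z / self.cell_size).astype(int))
        if key not in self.cells or fitness > self.cells[key][1]:
            self.cells[key] = (policy, fitness, z)

    def neighborhood(self, z, k=5):
        """Return the k archived entries closest to z in latent space: a
        local set of behaviorally related candidates rather than a single
        representative."""
        z = np.asarray(z, dtype=float)
        entries = sorted(self.cells.values(),
                         key=lambda e: np.linalg.norm(e[2] - z))
        return entries[:k]
```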

Why it matters

This paper addresses a critical limitation in continual RL: single-model preservation cannot counter the loss of plasticity that follows task interference. By introducing TeLAPA, it offers a framework that maintains diverse, latent-aligned policy neighborhoods, enabling more robust adaptation and markedly better performance in lifelong learning scenarios.

Original Abstract

Continual reinforcement learning must balance retention with adaptation, yet many methods still rely on single-model preservation, committing to one evolving policy as the main reusable solution across tasks. Even when a previously successful policy is retained, it may no longer provide a reliable starting point for rapid adaptation after interference, reflecting a form of loss of plasticity that single-policy preservation cannot address. Inspired by quality-diversity methods, we introduce TeLAPA (Transfer-Enabled Latent-Aligned Policy Archives), a continual RL framework that organizes behaviorally diverse policy neighborhoods into per-task archives and maintains a shared latent space so that archived policies remain comparable and reusable under non-stationary drift. This perspective shifts continual RL from retaining isolated solutions to maintaining skill-aligned neighborhoods with competent and behaviorally related policies that support future relearning. In our MiniGrid CL setting, TeLAPA learns more tasks successfully, recovers competence faster on revisited tasks after interference, and retains higher performance across a sequence of tasks. Our analyses show that source-optimal policies are often not transfer-optimal, even within a local competent neighborhood, and that effective reuse depends on retaining and selecting among multiple nearby alternatives rather than collapsing them to one representative. Together, these results reframe continual RL around reusable and competent policy neighborhoods, providing a route beyond single-model preservation toward more plastic lifelong agents.
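
The abstract's finding that source-optimal policies are often not transfer-optimal suggests a selection step at transfer time. The hedged sketch below, reusing `PolicyArchive` from the sketch above, probes several nearby archived candidates on the new task and seeds learning from the best one; `evaluate` (a short rollout score on the new task) and `target_z` (the new task's behavior embedding) are hypothetical stand-ins, not the paper's API.

```python
# Hedged sketch of transfer-time selection, reusing PolicyArchive from the
# sketch above. Because source-optimal policies are often not
# transfer-optimal, several nearby archived candidates are probed on the
# new task and the best probe seeds adaptation. `evaluate` and `target_z`
# are hypothetical stand-ins for a short rollout score and the new task's
# behavior embedding.
def select_seed_policy(archives, target_z, evaluate, k=5):
    """Gather the k nearest candidates from each past task's archive,
    score each with a cheap probe on the new task, and return the best
    starting policy for adaptation."""
    candidates = []
    for archive in archives:              # one PolicyArchive per past task
        candidates.extend(archive.neighborhood(target_z, k))
    best_policy, best_score = None, float("-inf")
    for policy, _source_fitness, _z in candidates:
        score = evaluate(policy)          # short evaluation, not full training
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy
```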
