ArXiv TLDR

Combining Trained Models in Reinforcement Learning

2605.02159

Ujjwal Patil, Javad Ghofrani

cs.LG cs.AI cs.NE

TLDR

A systematic review of DRL model reuse reveals patterns in transfer, ensemble, and federated learning, noting limitations in current empirical evidence.

Key contributions

  • Conducted a PRISMA-guided systematic review of 15 empirical studies on pretrained DRL model reuse.
  • Found positive results require substantial source-target similarity or explicit alignment mechanisms.
  • Noted that evidence for ensembles and federated aggregation is promising but currently sparse and narrow.
  • Highlighted the rarity of compute-matched comparisons, weakening claims of efficiency gains.

Why it matters

This paper provides a much-needed systematic synthesis of the fragmented literature on reusing trained DRL models. It clarifies empirical patterns and highlights critical methodological gaps, offering a clearer path for future research and benchmarking efforts in DRL transfer learning.

Original Abstract

Deep reinforcement learning (DRL) has delivered strong results in domains such as Atari and Go, but it still suffers from high sample cost and weak transfer beyond the training setting. A common response is to reuse information from previously trained models through transfer, distillation, ensemble methods, or federated training instead of learning each target task from random initialization. The literature on these mechanisms is fragmented, and published comparisons are hard to interpret because tasks, baselines, and compute budgets differ. This paper presents a PRISMA-guided systematic review of empirical studies on pretrained knowledge reuse in DRL. Starting from 589 records retrieved from IEEE Xplore, the ACM Digital Library, and citation tracing, we screened 570 unique records and assessed 89 full texts. After applying the final eligibility criteria, 15 empirical studies remained in the main synthesis. We analyzed them qualitatively across three factors: source-target similarity, diversity among reused models, and the fairness of comparisons against from-scratch baselines. Three patterns recur across the surviving corpus. First, positive results are concentrated in settings where source and target tasks share substantial structure or where the method includes an explicit gating or alignment mechanism. Second, evidence for ensembles and federated aggregation is promising but sparse and mostly limited to narrow settings. Third, compute-matched comparisons are rare, which weakens claims about efficiency gains over stronger single-agent baselines. The paper contributes a narrower and internally consistent review scope, a study-level synthesis of empirical evidence, and a provisional independence spectrum that should be treated as a hypothesis for future benchmarking rather than a validated metric.
