ArXiv TLDR

TTVS: Boosting Self-Exploring Reinforcement Learning via Test-time Variational Synthesis

arXiv: 2604.08468

Sikai Bai, Haoxi Li, Jie Zhang, Yongjiang Liu, Song Guo

cs.LG cs.AI

TLDR

TTVS boosts self-exploring reinforcement learning by dynamically synthesizing diverse training data from unlabeled test queries, improving large reasoning model (LRM) adaptation.

Key contributions

  • Dynamically augments training streams from unlabeled test queries for LRM self-evolution.
  • Online Variational Synthesis creates diverse, semantically-equivalent test query variations.
  • Test-time Hybrid Exploration balances accuracy exploitation with consistency-driven exploration.
  • Achieves superior performance, outperforming supervised RL with only unlabeled test data.
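The hybrid exploration idea above can be illustrated with a toy reward. This is a hedged sketch, not the paper's implementation: the function name, the `alpha` weighting, and the use of majority voting as a label-free "accuracy" proxy are all assumptions. With no ground-truth labels at test time, exploitation is approximated by within-variant answer agreement and exploration by agreement across the synthesized variants.

```python
from collections import Counter

def hybrid_reward(answers_per_variant, alpha=0.5):
    """Toy sketch of a test-time hybrid reward (illustrative only).

    answers_per_variant: list of lists; answers_per_variant[i] holds the
    model's sampled final answers for the i-th synthesized variant of one
    test query.
    """
    # Exploitation proxy: average majority-vote share within each variant.
    within, majority_answers = [], []
    for answers in answers_per_variant:
        top_answer, top_count = Counter(answers).most_common(1)[0]
        within.append(top_count / len(answers))
        majority_answers.append(top_answer)
    exploit = sum(within) / len(within)

    # Exploration proxy: how often the per-variant majority answers agree
    # across semantically-equivalent variants of the same query.
    explore = Counter(majority_answers).most_common(1)[0][1] / len(majority_answers)

    # Blend the two signals; alpha trades exploitation against exploration.
    return alpha * exploit + (1 - alpha) * explore
```

For example, two variants answered `["a", "a", "b"]` and `["a", "a", "a"]` give high cross-variant consistency but imperfect within-variant agreement, so the blended reward sits between the two signals.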

Why it matters

This paper tackles LRM adaptation in specialized or novel domains where verifiable rewards are scarce. By training on dynamically synthesized variations of unlabeled test queries, TTVS lets LRMs learn the underlying problem logic rather than overfit to a static query set, and it significantly boosts performance, even surpassing supervised methods.

Original Abstract

Despite significant advances in Large Reasoning Models (LRMs) driven by reinforcement learning with verifiable rewards (RLVR), this paradigm is fundamentally limited in specialized or novel domains where such supervision is prohibitively expensive or unavailable, posing a key challenge for test-time adaptation. While existing test-time methods offer a potential solution, they are constrained by learning from static query sets, risking overfitting to textual patterns. To address this gap, we introduce Test-Time Variational Synthesis (TTVS), a novel framework that enables LRMs to self-evolve by dynamically augmenting the training stream from unlabeled test queries. TTVS comprises two synergistic modules: (1) Online Variational Synthesis, which transforms static test queries into a dynamic stream of diverse, semantically-equivalent variations, forcing the model to learn underlying problem logic rather than superficial patterns; (2) Test-time Hybrid Exploration, which balances accuracy-driven exploitation with consistency-driven exploration across synthetic variants. Extensive experiments show TTVS yields superior performance across eight model architectures. Notably, using only unlabeled test-time data, TTVS not only surpasses other test-time adaptation methods but also outperforms state-of-the-art supervised RL-based techniques trained on vast, high-quality labeled data.
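The synthesis module described in the abstract can be sketched in miniature. This is a hedged illustration under stated assumptions: the paper would presumably use an LLM paraphraser to produce semantically-equivalent rewrites, whereas the `synthesize_variants` function below (a hypothetical name) merely swaps a `{name}` placeholder, which trivially preserves the problem's logic while varying its surface form.

```python
import random

def synthesize_variants(query, entity_pool, n=3, seed=0):
    """Toy illustration of online variational synthesis: produce
    surface-level rewrites of one test query that preserve the underlying
    problem logic.  A real system would paraphrase with an LLM; here we
    only rename a placeholder entity, so the answer is invariant.
    """
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    return [query.replace("{name}", rng.choice(entity_pool)) for _ in range(n)]

# Usage: three variants of one math word problem, identical except the name.
variants = synthesize_variants(
    "{name} has 3 apples and buys 2 more. How many apples now?",
    ["Alice", "Bob", "Carol"],
)
```

Because every variant shares the same answer, the model's agreement across them can serve as a label-free training signal, which is the role the consistency-driven exploration term plays.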
