ArXiv TLDR

Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key

arXiv:2605.06638

Tianle Wang, Zhaoyang Wang, Guangchen Lan, Xinpeng Wei, Sipeng Zhang + 2 more

cs.AI cs.CL

TLDR

ScaleLogic, a new synthetic reasoning framework, shows that the RL training compute needed by LLMs scales as a power law with reasoning depth, and that the logic's expressiveness sets the scaling exponent.

Key contributions

  • Introduces ScaleLogic, a synthetic logical reasoning framework for studying how RL training of LLMs scales with task difficulty.
  • Shows RL training compute follows a power law with reasoning depth ($T \propto D^{\gamma}$); see the fitting sketch after this list.
  • Finds the scaling exponent $\gamma$ increases monotonically with logical expressiveness, from $1.04$ to $2.60$.
  • More expressive training yields larger downstream gains and more compute-efficient transfer.
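
A power law like the one reported is typically estimated by linear regression in log-log space, since $\log T = \log c + \gamma \log D$. The minimal sketch below is illustrative only: the depths, the constant, the true exponent, and the noise model are all assumptions, not the paper's data or fitting code.

```python
import numpy as np

# Illustrative sketch only: the paper reports T ∝ D^gamma with R^2 > 0.99,
# but the numbers below are hypothetical, chosen to show how such an
# exponent could be estimated from (depth, compute) pairs.

rng = np.random.default_rng(0)

depths = np.array([2, 4, 8, 16, 32])   # reasoning depths D (hypothetical)
gamma_true, c = 2.0, 5.0                # assumed exponent and constant
compute = c * depths**gamma_true * rng.lognormal(0, 0.05, depths.size)  # noisy T

# A power law T = c * D^gamma is linear in log-log space:
# log T = log c + gamma * log D, so fit a line to (log D, log T).
slope, intercept = np.polyfit(np.log(depths), np.log(compute), 1)
print(f"estimated gamma ≈ {slope:.2f}, c ≈ {np.exp(intercept):.2f}")
```

With clean synthetic data the fitted slope recovers the assumed exponent almost exactly, which is why an $R^2 > 0.99$ log-log fit is strong evidence of genuine power-law behavior.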

Why it matters

Understanding how RL training scales with task difficulty is crucial for improving LLM reasoning. This work provides a systematic framework and key insights into the role of logical expressiveness, guiding more efficient and effective training strategies for long-horizon reasoning.

Original Abstract

Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our proposed framework supports a wide range of logics: from simple implication-only logic ("if-then") towards more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute $T$ follows a power law with respect to reasoning depth $D$ ($T \propto D^{\gamma}$, $R^{2} > 0.99$), and that the scaling exponent $\gamma$ increases monotonically with logical expressiveness, from $1.04$ to $2.60$. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to $+10.66$ points) and more compute-efficient transfer compared to less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and curriculum-based training substantially improves scaling efficiency.
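
To make the least expressive end of the ladder concrete, here is a minimal, hypothetical sketch of implication-only ("if-then") reasoning checked by forward chaining, where the number of chaining steps plays the role of the horizon $D$. The function and task encoding are illustrative assumptions, not ScaleLogic's actual generator or task format.

```python
# Hypothetical illustration of the implication-only ("if-then") end of the
# expressiveness ladder: known facts plus rules of the form premise -> conclusion,
# proved by forward chaining. Proof depth stands in for the horizon D.
# This is NOT ScaleLogic's actual generator or task format.

def forward_chain(facts: set[str], rules: list[tuple[str, str]], goal: str) -> int | None:
    """Return the chaining depth at which `goal` is derived, or None if unreachable."""
    derived, depth = set(facts), 0
    while goal not in derived:
        new = {concl for prem, concl in rules if prem in derived and concl not in derived}
        if not new:          # fixpoint reached without deriving the goal
            return None
        derived |= new
        depth += 1
    return depth

# A chain a -> b -> c -> d requires depth 3 from the single fact {a}.
rules = [("a", "b"), ("b", "c"), ("c", "d")]
print(forward_chain({"a"}, rules, "d"))  # 3
```

Lengthening such chains varies the horizon independently of the logic's expressiveness; adding connectives like "and", "or", "not", and "for all" would vary the second axis.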
