ArXiv TLDR

E3-TIR: Enhanced Experience Exploitation for Tool-Integrated Reasoning

2604.09455

Weiyang Guo, Zesheng Shi, Liye Zhao, Jiayuan Ma, Zeen Zhu + 3 more

cs.AI

TLDR

E3-TIR enhances LLM tool-integrated reasoning by efficiently exploiting diverse experiences, delivering a 6% performance gain with less than 10% of the synthetic data.

Key contributions

  • E3-TIR is a warm-up paradigm for LLM agent training in Tool-Integrated Reasoning.
  • Dynamically integrates Expert Prefixes, Expert Guided, and Self-Exploration experiences.
  • Employs branching exploration and mix policy optimization to resolve training conflicts.
  • Improves tool-use performance by 6% while using less than 10% of the synthetic data.
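
The contributions above can be sketched as a sampler over the three experience types. Everything below is an assumption for illustration only: the function names, mixing weights, and branching scheme are not from the paper, which does not publish implementation details in this summary.

```python
import random

# Hypothetical sketch of experience mixing (names and ratios are assumptions,
# not taken from the paper). Each rollout starts from one of three experience
# types; "branching exploration" forks several rollouts from a prefix of an
# expert trajectory (the "anchor") instead of always starting from scratch.

EXPERIENCE_TYPES = ("expert_prefix", "expert_guided", "self_exploration")

def sample_experience(expert_trajectory, weights=(0.4, 0.3, 0.3),
                      n_branches=4, rng=random):
    """Pick an experience type and build the starting contexts for rollouts."""
    kind = rng.choices(EXPERIENCE_TYPES, weights=weights)[0]
    if kind == "expert_prefix":
        # Branching exploration: cut the expert trajectory at a random anchor,
        # then let the policy continue from several copies of that prefix.
        anchor = rng.randrange(1, len(expert_trajectory))
        prefix = expert_trajectory[:anchor]
        return kind, [list(prefix) for _ in range(n_branches)]
    if kind == "expert_guided":
        # Full expert trajectory kept as guidance (e.g. for imitation terms).
        return kind, [list(expert_trajectory)]
    # Self-exploration: the policy rolls out from an empty context.
    return kind, [[] for _ in range(n_branches)]
```

A dynamic schedule (e.g. shifting `weights` toward self-exploration as the policy improves) would correspond to the paper's claim of adapting to the model's knowledge boundaries.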

Why it matters

This paper addresses critical limitations in current LLM training for tool-integrated reasoning, such as inefficient exploration and high data costs. E3-TIR offers a novel warm-up paradigm that significantly boosts performance while drastically reducing data requirements, making LLM tool-use more practical and scalable.

Original Abstract

While Large Language Models (LLMs) have demonstrated significant potential in Tool-Integrated Reasoning (TIR), existing training paradigms face significant limitations: Zero-RL suffers from inefficient exploration and mode degradation due to a lack of prior guidance, while SFT-then-RL is limited by high data costs and capability plateaus caused by low-entropy collapse. To address these challenges, we propose E3-TIR (Enhanced Experience Exploitation), a warm-up paradigm for the early stages of agent training. Specifically, we formulate training as the dynamic integration of three experience types: Expert Prefixes, Expert Guided, and Self-Exploration. By executing diverse branching exploration around expert "anchors" and employing a mix policy optimization mechanism, we effectively mitigate distribution shifts and resolve optimization conflicts arising from shared prefixes. Our method dynamically adapts the model's knowledge boundaries, effectively balancing exploration diversity with training efficiency. Experimental results demonstrate that E3-TIR achieves a 6% performance improvement over traditional paradigms on tool-use tasks, while requiring less than 10% of the synthetic data. Furthermore, in terms of ROI (a comprehensive metric integrating performance, data cost, and training efficiency), we achieve a 1.46x gain compared to baselines. Code is available at https://github.com/yuki-younai/E3-TIR.
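
The abstract describes ROI as integrating performance, data cost, and training efficiency but gives no formula here. The ratio below is purely an assumption, shown only to illustrate how such a metric trades off the three quantities; the paper's actual definition may differ.

```python
# Illustrative only: a toy return-on-investment metric. The formula is an
# assumption, not the paper's definition.

def roi(performance: float, data_cost: float, train_hours: float) -> float:
    """Hypothetical ROI: performance per unit of data and compute spent."""
    return performance / (data_cost * train_hours)

# A method matching baseline performance with 10% of the data and the same
# training time scores 10x higher on this toy metric.
baseline = roi(performance=0.70, data_cost=1.0, train_hours=1.0)
cheaper = roi(performance=0.70, data_cost=0.1, train_hours=1.0)
```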
