Android Coach: Improve Online Agentic Training Efficiency with Single State Multiple Actions
Guo Gan, Yuxuan Ding, Cong Chen, Yuwei Ren, Yin Huang, et al.
TLDR
Android Coach boosts online Android agent training efficiency by enabling multiple actions per state, overcoming single-action limitations.
Key contributions
- Proposes Android Coach, a "Single State Multiple Actions" paradigm for online RL.
- Utilizes a learned critic to estimate action values, avoiding extra emulator overhead.
- Integrates a process reward model and group-wise advantage estimator for reliable coaching.
- Achieves 1.4x higher training efficiency than PPO and GRPO at matched success rates, plus 7.5% and 8.3% success-rate gains on AndroidLab and AndroidWorld.
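The group-wise advantage estimator above can be sketched in a few lines. The paper's exact formulation is not given in this summary, so the baseline-subtraction form below (each action's critic value minus the group mean over all actions sampled at the same state) is an assumption:

```python
def group_advantages(critic_values):
    """Group-wise advantages for candidate actions at a single state.

    critic_values: critic estimates Q(s, a_i), one per sampled action.
    Assumed form: each advantage is the action's critic value minus the
    averaged critic output over the group, so advantages sum to zero.
    """
    baseline = sum(critic_values) / len(critic_values)
    return [q - baseline for q in critic_values]

# Example: three candidate actions scored by the critic at one state.
adv = group_advantages([0.2, 0.5, 0.8])
```

Because the baseline is the group mean, the estimator needs no separate value-function rollout, which matches the summary's claim that the critic replaces extra emulator interaction.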
Why it matters
Training online Android agents is costly and slow. Android Coach significantly improves efficiency and success rates by exploring multiple actions per state. This makes online reinforcement learning for complex agentic tasks more practical and scalable.
Original Abstract
Online reinforcement learning (RL) serves as an effective method for enhancing the capabilities of Android agents. However, guiding agents to learn through online interaction is prohibitively expensive due to the high latency of emulators and the sample inefficiency of existing RL algorithms. We identify a fundamental limitation in current approaches: the Single State Single Action paradigm, which updates the policy with one-to-one state-action pairs from online one-way rollouts without fully exploring each costly emulator state. In this paper, we propose Android Coach, a novel framework that shifts the training paradigm to Single State Multiple Actions, allowing the agent to sample and utilize multiple actions for a single online state. We enable this without additional emulator overhead by learning a critic that estimates action values. To ensure the critic serves as a reliable coach, we integrate a process reward model and introduce a group-wise advantage estimator based on the averaged critic outputs. Extensive experiments demonstrate the effectiveness and efficiency of Android Coach: it achieves 7.5% and 8.3% success rate improvements on AndroidLab and AndroidWorld over UI-TARS-1.5-7B, and attains 1.4x higher training efficiency than Single State Single Action methods PPO and GRPO at matched success rates.
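The Single State Multiple Actions idea in the abstract can be sketched as a training loop: at each costly emulator state, the policy samples several candidate actions, the learned critic scores all of them without touching the emulator, and advantages are computed against the group baseline. All names below (`StubPolicy`, `StubCritic`, the action vocabulary and scores) are illustrative stand-ins, not the paper's API:

```python
import random

class StubPolicy:
    """Toy policy over a small discrete UI-action set (stands in for the agent)."""
    def sample(self, state):
        return random.choice(["tap", "scroll", "type", "back"])
    def update(self, state, actions, advantages):
        pass  # a real update would be a policy-gradient step weighted by advantages

class StubCritic:
    """Toy critic: scores a (state, action) pair with no emulator call."""
    def score(self, state, action):
        return {"tap": 0.9, "scroll": 0.4, "type": 0.6, "back": 0.1}[action]

def train_step(state, policy, critic, k=4):
    """One Single State Multiple Actions update (sketch): sample k actions at
    a single state, score them all with the critic, and form group-baseline
    advantages -- only one action would then advance the real emulator."""
    actions = [policy.sample(state) for _ in range(k)]
    values = [critic.score(state, a) for a in actions]
    baseline = sum(values) / len(values)
    advantages = [v - baseline for v in values]
    policy.update(state, actions, advantages)
    chosen = actions[max(range(k), key=lambda i: values[i])]
    return chosen, advantages

chosen, adv = train_step("home_screen", StubPolicy(), StubCritic())
```

The contrast with one-way rollouts is that PPO- or GRPO-style Single State Single Action training would pay one emulator step per training sample, whereas here k samples share a single state.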