ArXiv TLDR

OpenMobile: Building Open Mobile Agents with Task and Trajectory Synthesis

arXiv:2604.15093

Kanzhi Cheng, Zehao Li, Zheng Ma, Nuo Chen, Jialin Cao + 9 more

cs.AI · cs.CL · cs.CV · cs.HC

TLDR

OpenMobile is an open-source framework for mobile agents that synthesizes high-quality task instructions and agent trajectories, enabling agents trained on its data to achieve competitive results on dynamic mobile agent benchmarks.

Key contributions

  • Introduces a scalable task synthesis pipeline using global environment memory for diverse, grounded instructions.
  • Develops a policy-switching strategy for trajectory rollout, capturing essential error-recovery data.
  • Achieves competitive results on AndroidWorld (64.7% with Qwen3-VL), outperforming existing open-data methods.
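The policy-switching idea above can be sketched as a rollout loop that hands control back and forth between a weak learner and a strong expert: the learner acts until it drifts off-track, then the expert takes over to demonstrate recovery, and those switch points put error-recovery behavior into the logged trajectory. This is a minimal illustrative sketch, not the authors' implementation; the toy environment (integer states stepping toward a goal), the `learner`/`expert` policies, and the `off_track` check are all invented for illustration.

```python
# Hedged sketch of a policy-switching trajectory rollout (illustrative only).
# The integer-state "environment", policies, and error check are assumptions.
import random

random.seed(0)  # make the toy learner's mistakes reproducible

def learner(state):
    # Weak policy under training: occasionally steps the wrong way.
    return state + 1 if random.random() < 0.7 else state - 1

def expert(state):
    # Strong reference policy: always steps toward the goal.
    return state + 1

def rollout(goal=5, max_steps=20):
    """Alternate control between learner and expert.

    The learner acts until a (hypothetical) off-track check fires,
    then the expert takes one recovery step; control returns to the
    learner once the trajectory is back on course. The logged tuples
    (controller, state, action) thus include error-recovery segments
    that pure expert imitation data would never contain.
    """
    state, trajectory, controller = 0, [], "learner"
    for _ in range(max_steps):
        if state == goal:
            break
        action = learner(state) if controller == "learner" else expert(state)
        trajectory.append((controller, state, action))
        off_track = action < state  # hypothetical error-detection rule
        controller = "expert" if off_track else "learner"
        state = action
    return trajectory

traj = rollout()
```

With this seed the learner errs a few times, the expert briefly takes over after each error, and the trajectory still reaches the goal; the recovery steps are exactly the data the contribution above says standard imitation learning misses.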

Why it matters

This paper addresses the data opacity issue in mobile agent training by providing an open-source framework for synthesizing high-quality task data. It enables broader research and development in mobile agents by making crucial training recipes and data publicly available.

Original Abstract

Mobile agents powered by vision-language models have demonstrated impressive capabilities in automating mobile tasks, with recent leading models achieving a marked performance leap, e.g., nearly 70% success on AndroidWorld. However, these systems keep their training data closed and remain opaque about their task and trajectory synthesis recipes. We present OpenMobile, an open-source framework that synthesizes high-quality task instructions and agent trajectories, with two key components: (1) a scalable task synthesis pipeline that constructs a global environment memory from exploration, then leverages it to generate diverse and grounded instructions; and (2) a policy-switching strategy for trajectory rollout. By alternating between learner and expert models, it captures essential error-recovery data often missing in standard imitation learning. Agents trained on our data achieve competitive results across three dynamic mobile agent benchmarks: notably, our fine-tuned Qwen2.5-VL and Qwen3-VL reach 51.7% and 64.7% on AndroidWorld, far surpassing existing open-data approaches. Furthermore, we conduct transparent analyses on the overlap between our synthetic instructions and benchmark test sets, and verify that performance gains stem from broad functionality coverage rather than benchmark overfitting. We release data and code at https://njucckevin.github.io/openmobile/ to bridge the data gap and facilitate broader mobile agent research.
