ArXiv TLDR

Synthetic Computers at Scale for Long-Horizon Productivity Simulation

arXiv: 2604.28181

Tao Ge, Baolin Peng, Hao Cheng, Jianfeng Gao

cs.AI · cs.CL · cs.LG

TLDR

This paper introduces Synthetic Computers at Scale, a method for creating realistic virtual computer environments and running long-horizon productivity simulations on them; the resulting experiential data improves agent performance.

Key contributions

  • Creates "Synthetic Computers at Scale" for realistic virtual environments with rich content and folder structures.
  • Simulates long-horizon productivity tasks (e.g., month-long projects) using two interacting agents.
  • Generates experiential learning data that significantly boosts agent performance on productivity evaluations.
  • Shows scalability with 1,000 synthetic computers, enabling diverse, large-scale agent training.
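The "synthetic computer" described above can be pictured as a persona-conditioned folder hierarchy filled with content-rich artifacts. The paper does not publish code, so the sketch below is a minimal illustration under assumed data structures: the class names, folder names, and the stub generator are all hypothetical stand-ins for what an LLM-driven pipeline would produce.

```python
# Illustrative sketch only: in the paper's pipeline, an LLM conditioned on a
# persona proposes the folder hierarchy and writes each artifact's content.
from dataclasses import dataclass, field


@dataclass
class Artifact:
    name: str      # e.g. "budget.xlsx" (hypothetical example)
    kind: str      # "document" | "spreadsheet" | "presentation"
    content: str   # content-rich body (LLM-generated in the real pipeline)


@dataclass
class Folder:
    name: str
    artifacts: list = field(default_factory=list)
    subfolders: list = field(default_factory=list)


def create_synthetic_computer(persona: str) -> Folder:
    """Stub generator: returns a tiny fixed hierarchy; a real implementation
    would sample a realistic, persona-specific filesystem at scale."""
    projects = Folder(
        name="Projects",
        artifacts=[Artifact("roadmap.docx", "document",
                            f"Roadmap drafted for a {persona}.")])
    finance = Folder(
        name="Finance",
        artifacts=[Artifact("budget.xlsx", "spreadsheet",
                            "Quarterly budget figures.")])
    return Folder(name=f"{persona}-home", subfolders=[projects, finance])


def count_artifacts(folder: Folder) -> int:
    """Recursively count artifacts across the whole hierarchy."""
    return len(folder.artifacts) + sum(
        count_artifacts(sub) for sub in folder.subfolders)
```

Because personas are cheap to enumerate, a generator of this shape could in principle be run once per persona to produce the 1,000 (or far more) environments the paper reports.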

Why it matters

This paper offers a scalable method for training AI agents on realistic, long-horizon productivity tasks. By generating diverse synthetic computer environments, it provides a crucial substrate for agent self-improvement and reinforcement learning, both vital for developing AI that is capable in real-world work settings.

Original Abstract

Realistic long-horizon productivity work is strongly conditioned on user-specific computer environments, where much of the work context is stored and organized through directory structures and content-rich artifacts. To scale synthetic data creation for such productivity scenarios, we introduce Synthetic Computers at Scale, a scalable methodology for creating such environments with realistic folder hierarchies and content-rich artifacts (e.g., documents, spreadsheets, and presentations). Conditioned on each synthetic computer, we run long-horizon simulations: one agent creates productivity objectives that are specific to the computer's user and require multiple professional deliverables and about a month of human work; another agent then acts as that user and keeps working across the computer -- for example, navigating the filesystem for grounding, coordinating with simulated collaborators, and producing professional artifacts -- until these objectives are completed. In preliminary experiments, we create 1,000 synthetic computers and run long-horizon simulations on them; each run requires over 8 hours of agent runtime and spans more than 2,000 turns on average. These simulations produce rich experiential learning signals, whose effectiveness is validated by significant improvements in agent performance on both in-domain and out-of-domain productivity evaluations. Given that personas are abundant at billion scale, this methodology can in principle scale to millions or even billions of synthetic user worlds with sufficient compute, enabling broader coverage of diverse professions, roles, contexts, environments, and productivity needs. We argue that scalable synthetic computer creation, together with at-scale simulations, is highly promising as a foundational substrate for agent self-improvement and agentic reinforcement learning in long-horizon productivity scenarios.
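The abstract's two-agent simulation (one agent drafting user-specific objectives, another acting as the user across thousands of turns until the objectives are completed) can be sketched as a simple loop. Everything below is an assumed skeleton, not the authors' implementation: both agents are stubbed with deterministic placeholders where the real system would invoke LLM agents that navigate the filesystem and coordinate with simulated collaborators.

```python
# Hypothetical skeleton of the two-agent, long-horizon simulation loop.


def objective_agent(persona: str) -> list:
    # Real version: an LLM drafts month-scale, computer-grounded objectives
    # requiring multiple professional deliverables. Here: fixed placeholders.
    return [f"{persona}: deliver project report",
            f"{persona}: prepare slide deck"]


def user_agent_step(objective: str, turn: int) -> bool:
    # Real version: the user agent takes one action per turn (browse files,
    # message collaborators, edit artifacts) and reports whether the
    # objective's deliverables are complete. Stub: done after 3 turns.
    return turn >= 3


def run_simulation(persona: str, max_turns: int = 2000) -> dict:
    """Keep the user agent working until all objectives are completed
    or the turn budget (>2,000 turns on average in the paper) runs out."""
    objectives = objective_agent(persona)
    done, turns = set(), 0
    while len(done) < len(objectives) and turns < max_turns:
        turns += 1
        for obj in objectives:
            if obj not in done and user_agent_step(obj, turns):
                done.add(obj)
    return {"objectives": objectives,
            "completed": sorted(done),
            "turns": turns}
```

The full trajectory (every turn's actions and intermediate artifacts) is what the paper mines as experiential learning signal for training.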
