
SWE-WebDevBench: Evaluating Coding Agent Application Platforms as Virtual Software Agencies

arXiv: 2605.04637

Siddhant Saxena, Nilesh Trivedi, Vinayaka Jyothi

cs.MA · cs.SE

TLDR

SWE-WebDevBench introduces a 68-metric framework to evaluate AI coding platforms as virtual software agencies, revealing critical flaws in the current generation of AI app builders.

Key contributions

  • Introduces SWE-WebDevBench, a 68-metric framework for evaluating AI coding platforms as virtual software agencies.
  • Identifies a "specification bottleneck", where platforms compress rich business requirements into oversimplified technical plans.
  • Reveals pervasive frontend-backend decoupling, where visually polished UIs mask absent or broken backend infrastructure, and a steep production-readiness cliff: no platform scores above 60% on engineering quality.
  • Highlights widespread security and infrastructure failures, with no platform exceeding a 65% security score (against a 90% target) and concurrency handling as low as 6%.

Why it matters

Current AI coding platforms struggle with real-world application development. This paper provides a comprehensive benchmark to identify and address critical gaps in business understanding, engineering quality, and security, guiding future platform improvements.

Original Abstract

The emergence of "vibe coding" platforms, where users describe applications in natural language and AI agents autonomously generate full-stack software, has created a need for rigorous evaluation beyond code-level benchmarks. In order to assess them as virtual software development agencies on understanding business requirements, making architectural decisions, writing production code, handling iterative modifications, and maintaining business readiness, we introduce SWE-WebDev Bench, a 68-metric evaluation framework spanning 25 primary and 43 diagnostic metrics across seven groups, organized along three dimensions: Interaction Mode (App Creation Request (ACR) vs. App Modification Request (AMR)), Agency Angle (Product Manager (PM), Engineering, Ops), and Complexity Tier (T4 multi-role SaaS, T5 AI-native). Our evaluation (six platforms, three domains, 18 evaluation cells) reveals four recurring shortcomings in the current generation of AI app builders: (1) A specification bottleneck, where platforms compress rich business requirements into oversimplified technical plans, (2) A pervasive frontend-backend decoupling, where visually polished UIs mask absent or broken backend infrastructure, (3) A steep production-readiness cliff, where no platform scores above 60% on engineering quality and post-generation human effort varies substantially across platforms and (4) Widespread security and infrastructure failures, with no platform exceeding 65% Security Score against a 90% target and concurrency handling as low as 6%. These observations are descriptive of our sample and require larger-scale replication to establish generality. We release SWE-WebDev Bench as a community benchmark to enable such replication and help platform builders identify and address these gaps. Code and benchmark resources are available at: https://github.com/snowmountainAi/webdevbench and https://webdevbench.com/.
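The abstract compresses the framework's structure into a single sentence, so a small sketch may help unpack it. The Python below models the evaluation setup as described: six platforms crossed with three domains yield the paper's 18 evaluation cells, and each metric is tagged along the three organizing dimensions. The platform and domain names, the `Metric` fields, and the group labels are placeholder assumptions, not identifiers taken from the paper or its repository.

```python
from dataclasses import dataclass
from itertools import product

# The three organizing dimensions, as listed in the abstract.
INTERACTION_MODES = ("ACR", "AMR")            # App Creation / App Modification Request
AGENCY_ANGLES = ("PM", "Engineering", "Ops")  # Product Manager, Engineering, Ops
COMPLEXITY_TIERS = ("T4", "T5")               # T4 multi-role SaaS, T5 AI-native

# Placeholder names: the paper evaluates six platforms across three domains,
# but this sketch does not assume which ones.
PLATFORMS = [f"platform_{i}" for i in range(1, 7)]
DOMAINS = ["domain_a", "domain_b", "domain_c"]

@dataclass(frozen=True)
class Metric:
    name: str   # e.g. "Security Score" (named in the abstract)
    kind: str   # "primary" (25 total) or "diagnostic" (43 total)
    group: str  # one of the seven metric groups (not enumerated in the abstract)
    angle: str  # which Agency Angle the metric speaks to

@dataclass(frozen=True)
class EvaluationCell:
    platform: str
    domain: str

# Six platforms x three domains = the 18 evaluation cells reported in the paper.
cells = [EvaluationCell(p, d) for p, d in product(PLATFORMS, DOMAINS)]
assert len(cells) == 18

# Illustrative metric instance; the group and angle labels are assumptions.
security_score = Metric(name="Security Score", kind="primary",
                        group="Security & Infrastructure", angle="Ops")
```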
