SWE-chat: Coding Agent Interactions From Real Users in the Wild
Joachim Baumann, Vishakh Padmakumar, Xiang Li, John Yang, Diyi Yang, et al.
TLDR
The SWE-chat dataset captures real-world coding agent usage, revealing inefficiencies, security risks, and user pushback patterns in developer workflows.
Key contributions
- Introduces SWE-chat, the first large-scale dataset of 6,000 real coding agent sessions from open-source developers, comprising over 63,000 user prompts and 355,000 agent tool calls (a hypothetical session-record sketch follows this list).
- Identifies bimodal coding patterns: agents author virtually all committed code in 41% of sessions ("vibe coding"), while humans write all code themselves in 23%.
- Finds agents are inefficient: only 44% of agent-produced code survives into user commits, and agent-written code introduces more security vulnerabilities than human-authored code.
- Reveals significant user pushback: users correct, report failures to, or interrupt agents in 44% of all turns.
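
To make the dataset contents concrete, below is a minimal sketch of what a single session record might contain. The class and field names are illustrative assumptions, not the actual SWE-chat schema.

```python
from dataclasses import dataclass, field
from typing import Literal

# Illustrative sketch only: class and field names are assumptions,
# not the actual SWE-chat schema.

@dataclass
class ToolCall:
    tool: str                     # e.g. "edit_file", "run_tests"
    arguments: dict = field(default_factory=dict)

@dataclass
class Turn:
    user_prompt: str
    tool_calls: list[ToolCall] = field(default_factory=list)
    pushback: bool = False        # user corrected, reported a failure, or interrupted

@dataclass
class Session:
    repo: str                     # public repository the session was collected from
    turns: list[Turn] = field(default_factory=list)
    # per-line authorship attribution for code that reached a user commit
    committed_line_authors: list[Literal["agent", "human"]] = field(default_factory=list)
```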
Why it matters
This paper provides crucial empirical evidence on how AI coding agents are actually used in the wild, moving beyond controlled benchmarks. Its findings highlight practical challenges such as inefficiency and security risk, offering a foundation for improving agent design and integration into developer tools; this kind of evidence is essential for making AI coding agents genuinely useful in real-world development.
Original Abstract
AI coding agents are being adopted at scale, yet we lack empirical evidence on how people actually use them and how much of their output is useful in practice. We present SWE-chat, the first large-scale dataset of real coding agent sessions collected from open-source developers in the wild. The dataset currently contains 6,000 sessions, comprising more than 63,000 user prompts and 355,000 agent tool calls. SWE-chat is a living dataset; our collection pipeline automatically and continually discovers and processes sessions from public repositories. Leveraging SWE-chat, we provide an initial empirical characterization of real-world coding agent usage and failure modes. We find that coding patterns are bimodal: in 41% of sessions, agents author virtually all committed code ("vibe coding"), while in 23%, humans write all code themselves. Despite rapidly improving capabilities, coding agents remain inefficient in natural settings. Just 44% of all agent-produced code survives into user commits, and agent-written code introduces more security vulnerabilities than code authored by humans. Furthermore, users push back against agent outputs -- through corrections, failure reports, and interruptions -- in 44% of all turns. By capturing complete interaction traces with human vs. agent code authorship attribution, SWE-chat provides an empirical foundation for moving beyond curated benchmarks towards an evidence-based understanding of how AI agents perform in real developer workflows.
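
As a rough, hypothetical illustration of how headline statistics like these could be computed from authorship-attributed traces, consider the sketch below; the "virtually all" threshold and the toy numbers are invented for illustration and are not the authors' analysis code or real SWE-chat data.

```python
# Hypothetical illustration of how the headline statistics could be computed.
# The threshold and toy numbers below are assumptions, not SWE-chat data.

def vibe_coding_share(agent_fraction_per_session: list[float]) -> float:
    """Share of sessions where virtually all committed code is agent-authored
    (operationalized here, arbitrarily, as >= 95% of committed lines)."""
    return sum(f >= 0.95 for f in agent_fraction_per_session) / len(agent_fraction_per_session)

def agent_code_survival(agent_lines_produced: int, agent_lines_committed: int) -> float:
    """Fraction of agent-produced lines that survive into user commits."""
    return agent_lines_committed / agent_lines_produced

def pushback_rate(turns_with_pushback: int, total_turns: int) -> float:
    """Share of turns where the user corrects, reports a failure, or interrupts."""
    return turns_with_pushback / total_turns

# Toy numbers chosen only to echo the reported figures:
print(vibe_coding_share([1.0, 0.0, 0.98, 0.3]))  # 0.5
print(agent_code_survival(100_000, 44_000))      # 0.44
print(pushback_rate(27_720, 63_000))             # 0.44
```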