ArXiv TLDR

LLM-Based Automated Diagnosis Of Integration Test Failures At Google

arXiv:2604.12108

Celal Ziftci, Ray Liu, Spencer Greene, Livio Dalloro

cs.SE · cs.AI

TLDR

Auto-Diagnose uses LLMs to efficiently diagnose integration test failures at Google, achieving high accuracy and earning positive developer feedback.

Key contributions

  • Leverages LLMs to analyze complex integration test failure logs and provide concise summaries.
  • Achieved 90.14% accuracy in diagnosing root causes across 71 real-world failures.
  • Integrated into Critique, Google's internal code review system, offering contextual, in-time assistance.
  • Deployed Google-wide, used on over 52,000 failing tests with strong positive user reception.
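The workflow described in the bullets above — cut a noisy failure log down to its most relevant lines, then ask an LLM for a concise root-cause summary — can be sketched roughly as follows. This is a hypothetical illustration, not Auto-Diagnose's actual implementation: the marker-based filter, the prompt wording, and the `diagnose` function are all assumptions, and the LLM call is left as a pluggable stub.

```python
# Hypothetical sketch of an LLM-based test-failure diagnosis pipeline.
# Not Auto-Diagnose's implementation; names and heuristics are illustrative.

ERROR_MARKERS = ("ERROR", "FATAL", "Exception", "Traceback", "FAILED")


def extract_relevant_lines(log: str, max_lines: int = 20) -> list[str]:
    """Reduce a low signal-to-noise log to lines likely tied to the failure."""
    hits = [line for line in log.splitlines()
            if any(marker in line for marker in ERROR_MARKERS)]
    return hits[:max_lines]


def build_prompt(relevant_lines: list[str]) -> str:
    """Assemble the prompt asking the LLM for a root-cause summary."""
    joined = "\n".join(relevant_lines)
    return (
        "You are diagnosing an integration test failure.\n"
        "Given these log lines, state the likely root cause in one sentence\n"
        "and cite the most relevant line:\n\n" + joined
    )


def diagnose(log: str, llm=None) -> str:
    """Run the pipeline; `llm` is a callable(prompt) -> summary.

    With no model wired up, return the prompt itself so the sketch
    stays runnable end to end.
    """
    prompt = build_prompt(extract_relevant_lines(log))
    return prompt if llm is None else llm(prompt)


if __name__ == "__main__":
    sample = ("INFO starting test harness\n"
              "INFO rpc to backend ok\n"
              "ERROR backend unavailable: deadline exceeded\n"
              "INFO retrying\n")
    print(diagnose(sample))
```

In a real deployment the summary would then be posted as a finding in the code review, alongside the log lines it cites, so the developer sees the diagnosis without opening the raw log.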

Why it matters

This paper introduces Auto-Diagnose, an LLM-powered tool that significantly improves the efficiency of diagnosing complex integration test failures. Its high accuracy and seamless integration into developer workflows at Google demonstrate the practical value of AI in critical software development tasks.

Original Abstract

Integration testing is critical for the quality and reliability of complex software systems. However, diagnosing their failures presents significant challenges due to the massive volume, unstructured nature, and heterogeneity of logs they generate. These result in a high cognitive load, low signal-to-noise ratio, and make diagnosis difficult and time-consuming. Developers complain about these difficulties consistently and report spending substantially more time diagnosing integration test failures compared to unit test failures. To address these shortcomings, we introduce Auto-Diagnose, a novel diagnosis tool that leverages LLMs to help developers efficiently determine the root cause of integration test failures. Auto-Diagnose analyzes failure logs, produces concise summaries with the most relevant log lines, and is integrated into Critique, Google's internal code review system, providing contextual and in-time assistance. Based on our case studies, Auto-Diagnose is highly effective. A manual evaluation conducted on 71 real-world failures demonstrated 90.14% accuracy in diagnosing the root cause. Following its Google-wide deployment, Auto-Diagnose was used across 52,635 distinct failing tests. User feedback indicated that the tool was deemed "Not helpful" in only 5.8% of cases, and it was ranked #14 in helpfulness among 370 tools that post findings in Critique. Finally, user interviews confirmed the perceived usefulness of Auto-Diagnose and positive reception of integrating automatic diagnostic assistance into existing workflows. We conclude that LLMs are highly successful in diagnosing integration test failures due to their capacity to process and summarize complex textual data. Integrating such AI-powered tooling automatically into developers' daily workflows is perceived positively, with the tool's accuracy remaining a critical factor in shaping developer perception and adoption.
