Bug-Report-Driven Fault Localization: Industrial Benchmarking and Lessons Learned at ABB Robotics
Pernilla Hall, Anton Ununger, Riccardo Rubei, Alessio Bucaioni
TLDR
AI-driven fault localization using only bug report text aids industrial debugging without code or runtime data.
Key contributions
- Framed fault localization as supervised text classification using bug report text only.
- Evaluated traditional ML and transformer models on five years of ABB Robotics bug reports.
- Traditional models outperformed transformers; data augmentation boosted Random Forest results.
- Approach requires no source code or execution data, fitting industrial maintenance workflows.
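The text-only framing above can be sketched as a minimal TF-IDF classifier in pure Python. This is an illustrative toy, not the paper's pipeline: the bug-report strings and component labels below are hypothetical, and a nearest-centroid rule stands in for the Logistic Regression / SVM / Random Forest models actually evaluated.

```python
import math
from collections import Counter, defaultdict

def tfidf_vectors(docs):
    """Compute TF-IDF weight dicts for a list of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        for term in set(doc):
            df[term] += 1
    # Smoothed IDF so terms seen in every document still get weight.
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return vecs, idf

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical training data: bug-report text -> faulty component label.
reports = [
    ("gripper does not release payload after motion stop", "motion"),
    ("trajectory planner overshoots joint limits on fast moves", "motion"),
    ("teach pendant screen freezes when opening io config", "ui"),
    ("pendant display flickers in io configuration menu", "ui"),
]
tokens = [text.split() for text, _ in reports]
labels = [label for _, label in reports]
vecs, idf = tfidf_vectors(tokens)

# Nearest-centroid classification: sum TF-IDF vectors per component label.
centroids = {}
for vec, label in zip(vecs, labels):
    centroid = centroids.setdefault(label, defaultdict(float))
    for term, weight in vec.items():
        centroid[term] += weight

def predict(text):
    """Assign a new bug report to the most similar component centroid."""
    tf = Counter(text.split())
    total = sum(tf.values())
    query = {t: (c / total) * idf.get(t, 0.0) for t, c in tf.items()}
    return max(centroids, key=lambda lbl: cosine(query, centroids[lbl]))
```

A new report like `predict("pendant screen freezes in io menu")` maps to the `"ui"` component because its TF-IDF vector shares weighted terms only with the UI-related training reports.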
Why it matters
This paper shows AI can localize faults using just bug report text, simplifying industrial debugging. It challenges the dominance of transformers in domain-specific tasks and offers a scalable, low-cost tool for software maintenance.
Original Abstract
Software quality assurance remains a major challenge in industrial environments, where large-scale and long-lived systems inevitably accumulate defects. Identifying the location of a fault is often time-consuming and costly, particularly during maintenance phases when developers must rely primarily on textual bug reports rather than complete runtime or code-level context. In this study, we investigated if artificial intelligence can support fault localization using only the natural-language content of bug reports. By relying only on textual information, our approach requires no access to source code, execution traces, or static analysis artifacts, making it directly deployable within existing industrial maintenance workflows. We framed fault localization as a supervised text classification problem and evaluated three traditional machine learning models (Logistic Regression, Support Vector Machine, and Random Forest) and two fine-tuned transformer-based language models (RoBERTa-Base and DistilRoBERTa). Our evaluation used proprietary data from ABB Robotics in Sweden, comprising five years of resolved industrial bug reports, each linked to its verified code fix. This setting allowed us to assess model effectiveness under realistic industrial constraints. Our results showed that traditional models using term frequency-inverse document frequency (TF-IDF) features consistently outperformed the fine-tuned language models on this dataset, while data augmentation improved Random Forest performance. These findings challenge the assumption that transformer-based models universally outperform classical approaches in industrial contexts with domain-specific data. We demonstrated that historical bug reports can be systematically used for text-based, artificial intelligence-assisted fault localization, providing a scalable, low-cost, and empirically grounded complement to common debugging practices in industry.
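The abstract notes that data augmentation improved Random Forest performance but does not describe the technique used. One common lightweight augmentation for short texts is random token deletion, sketched below; the function name and parameters are illustrative assumptions, not the paper's method.

```python
import random

def augment(report, drop_prob=0.1, seed=None):
    """Create a perturbed copy of a bug report by randomly dropping tokens.

    A generic text-augmentation heuristic for growing a training set;
    the paper does not specify which augmentation strategy it applied.
    """
    rng = random.Random(seed)
    tokens = report.split()
    kept = [t for t in tokens if rng.random() >= drop_prob]
    # Never return an empty report; fall back to the original text.
    return " ".join(kept) if kept else report
```

In a training loop, each original report would be paired with a few such perturbed copies (sharing the original's component label) before vectorization, which can help tree ensembles like Random Forest see more varied term combinations.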