Similar Pattern Annotation via Retrieval Knowledge for LLM-Based Test Code Fault Localization
Golnaz Gharachorlu, Mahsa Panahandeh, Lionel C. Briand, Ruifeng Gao, Ruiyuan Wan
TLDR
SPARK improves LLM-based Test Code Fault Localization by retrieving similar fault-labeled test cases from CI debugging knowledge and annotating suspicious lines of the failing test, improving localization accuracy.
Key contributions
- Introduces SPARK, a framework for LLM-based Test Code Fault Localization (TCFL) using CI debugging knowledge.
- Retrieves similar fault-labeled test cases and uses them to annotate suspicious lines of the failing test, guiding the LLM's reasoning (see the retrieval sketch after this list).
- Avoids prompt-length explosion common in naive retrieval-augmented LLM approaches for TCFL.
- Outperforms the existing LLM-based TCFL baseline on three industrial datasets, identifying more correct faulty locations at comparable inference cost.
Why it matters
TCFL is a critical yet under-researched problem in CI environments, where faulty test scripts can cause significant delays. SPARK addresses this by leveraging historical debugging knowledge to make LLM-based fault localization more effective. This advancement helps developers quickly pinpoint and fix issues in large test suites, improving software quality and development efficiency.
Original Abstract
Software failures remain a major challenge in modern software development, and identifying the code elements responsible for failures is a time-consuming debugging task. While extensive research has focused on fault localization in the system under test (SUT), failures can also originate from faulty system test scripts. This problem, known as Test Code Fault Localization (TCFL), has received significantly less attention despite its importance in continuous integration (CI) environments where large test suites are executed frequently. TCFL is particularly challenging because it typically operates under black-box conditions, relies on limited diagnostic signals such as error messages and partial logs, and involves large system-level test scripts that expand the fault localization search space. In this paper, we propose SPARK, a framework that integrates accumulated debugging knowledge from CI environments into Large Language Model (LLM)-based TCFL. Given a newly observed failing test case, SPARK retrieves similar fault-labeled test cases from a debugging knowledge corpus and selectively annotates suspicious lines of the failing test based on their similarity to previously observed fault patterns. These annotations guide the LLM's reasoning while maintaining scalability and avoiding the prompt-length explosion common to naive retrieval-augmented approaches. We evaluate SPARK on three industrial datasets containing real-world faulty Python test cases from different software products. The results show that SPARK consistently improves fault localization effectiveness compared to the existing LLM-based TCFL baseline while maintaining comparable inference cost and token usage. In particular, the approach advances the state of the art by identifying more correct faulty locations in complex test cases containing multiple faults.