Test Design and Review Argumentation in AI-Assisted Test Generation
Eduard Paul Enoiu, Robert Feldt
TLDR
This paper introduces a conceptual taxonomy and a structured template for AI-assisted test generation, focusing on the argumentation behind test design decisions.
Key contributions
- Proposes a conceptual taxonomy and structured template for AI-assisted test generation.
- Characterizes each test case by its test goal, claim, reason, and evidence (see the sketch after this list).
- Helps engineers understand the argumentation behind AI-generated test decisions.
- Supports both constructive test design and retrospective review of test arguments.
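To make the four-part characterization concrete, here is a minimal sketch of how an argumentation record could be attached to a generated test. The `TestArgument` class, the `review` helper, and the example values (requirement ID, defect number) are illustrative assumptions, not the paper's notation; only the four field names follow the taxonomy.

```python
from dataclasses import dataclass


@dataclass
class TestArgument:
    """Argumentation record attached to a generated test case
    (illustrative sketch; field names follow the paper's taxonomy)."""
    test_goal: str  # what the test is intended to establish
    claim: str      # the specific assertion the test is meant to support
    reason: str     # why this test design addresses the claim
    evidence: str   # what backs the reason (spec section, coverage data, ...)


# Hypothetical example for an AI-generated boundary test.
arg = TestArgument(
    test_goal="Exercise the upper boundary of the withdrawal limit",
    claim="withdraw() rejects amounts above the configured daily limit",
    reason="Boundary-value analysis targets off-by-one faults at the limit",
    evidence="Requirement REQ-ACC-12; prior defect report #482",
)


def review(argument: TestArgument) -> list[str]:
    """Retrospective review: flag parts of the argument left empty."""
    return [name for name, value in vars(argument).items() if not value.strip()]


print(review(arg))  # -> [] when every part of the argument is filled in
```

In constructive use, such a record would be filled in as the test is designed; in retrospective use, a reviewer checks whether the attached claim, reason, and evidence actually justify the test, independent of whether the test itself looks plausible.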
Why it matters
As AI generates more tests, understanding *why* they exist becomes crucial for engineers. This work provides a structured way to represent test design arguments, improving review and trust in AI-assisted testing.
Original Abstract
AI assistants can increasingly generate and evolve test cases. The challenge is no longer merely to produce them, but also to help engineers understand why a generated artefact exists and what supports it. Existing work has focused on classifying testing techniques, linking requirements to tests and structuring system assurance arguments, but it does not explicitly represent the argumentation behind individual test design decisions. We propose a conceptual taxonomy and a structured template for AI-assisted test generation that characterizes a test case by its test goal, claim, reason, and evidence. The taxonomy is intended for both constructive use during test design and retrospective use during review, to assess the quality of the attached argument rather than the plausibility or objective value of the generated test cases.