TestDecision: Sequential Test Suite Generation via Greedy Optimization and Reinforcement Learning
Guoqing Wang, Chengran Yang, Xiaoxuan Zhou, Zeyu Sun, Bo Wang + 2 more
TLDR
TestDecision uses greedy optimization and RL to enable open-source LLMs to generate high-quality, sequential test suites, boosting coverage and bug detection.
Key contributions
- Formalizes test suite generation as an MDP with monotone submodularity, enabling a greedy optimization approach.
- Introduces TestDecision, an RL-trained framework that transforms LLMs into neural greedy experts for sequential test generation.
- Achieves 38-52% higher branch coverage and 298-558% higher pass rates than existing methods.
- Finds 58-95% more bugs and shows strong generalization, matching GPT-5.2 performance with a 7B model.
Why it matters
Open-source LLMs struggle with generating effective test suites due to a lack of suite-level perspective. TestDecision addresses this by providing a principled approach to sequential test generation. This work significantly improves automated testing efficiency and bug detection, making advanced LLM-powered testing more accessible and cost-effective for industry.
Original Abstract
With the rapid evolution of LLMs, automated software testing is witnessing a paradigm shift. While proprietary models like GPT-4o demonstrate impressive capabilities, their high deployment costs and data privacy concerns make open-source LLMs a practical imperative for many academic and industrial scenarios. Automated test generation has evolved toward iterative, LLM-based workflows for constructing test suites. When utilizing open-source LLMs, we empirically observe that they lack a suite-level perspective, suffering from structural myopia: failing to generate new tests with large marginal gain given the currently covered status. In this paper, from the perspective of sequences, we formalize test suite generation as an MDP and demonstrate that its objective exhibits monotone submodularity, which enables an effective relaxation of this NP-hard global optimization into a tractable step-wise greedy procedure. Guided by this insight, we propose TestDecision, which transforms LLMs into neural greedy experts. TestDecision consists of two synergistic components: (1) an inference framework that implements test suite construction following a step-wise greedy strategy; and (2) a reinforcement learning training pipeline that equips the base LLM with the sequential test generation ability to maximize marginal gain. Comprehensive evaluations on the ULT benchmark demonstrate that TestDecision significantly outperforms existing advanced methods. It brings an improvement of 38.15-52.37% in branch coverage and 298.22-558.88% in execution pass rate over all base models, achieving performance on a 7B backbone comparable to the much larger proprietary LLM GPT-5.2. Furthermore, TestDecision can find 58.43-95.45% more bugs than vanilla base LLMs and exhibits superior generalization on LiveCodeBench, proving its capability to construct high-quality test suites.
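The step-wise greedy relaxation the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: candidate tests (which TestDecision generates with an LLM) are abstracted as plain `(name, branch_set)` pairs, and at each step the candidate with the largest marginal branch-coverage gain is added until no candidate adds new coverage or the budget is exhausted.

```python
# Hypothetical sketch of step-wise greedy suite construction by marginal
# coverage gain. In TestDecision the candidates come from an RL-trained LLM;
# here they are given as (test_id, set of covered branches) pairs.

def greedy_test_suite(candidates, budget):
    """Greedily select up to `budget` tests, maximizing marginal branch gain."""
    suite = []
    covered = set()
    remaining = list(candidates)
    for _ in range(budget):
        # Marginal gain of a test = branches it covers beyond `covered`.
        best = max(remaining, key=lambda t: len(t[1] - covered), default=None)
        if best is None or not (best[1] - covered):
            break  # monotone submodularity: zero gain now means zero gain later
        suite.append(best[0])
        covered |= best[1]
        remaining.remove(best)
    return suite, covered
```

For monotone submodular objectives like branch coverage, this greedy procedure carries the classic (1 - 1/e) approximation guarantee relative to the optimal fixed-size suite, which is what makes the step-wise relaxation of the NP-hard global problem principled.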