DryRUN: On the Role of Public Tests in LLM-Driven Code Generation
Kaushitha Silva, Srinath Perera
TLDR
DryRUN is an LLM framework for code generation that autonomously creates test inputs and simulates their execution, eliminating the need for human-provided public tests.
Key contributions
- Identifies and mitigates an "overconfidence gap" in LLM code generation due to public test reliance.
- DryRUN autonomously generates its own test inputs and simulates execution traces for self-correction.
- Eliminates the need for human-authored public test cases, a major bottleneck in software development.
- Matches state-of-the-art performance on LiveCodeBench without external feedback or public tests.
Why it matters
Current LLM code generation relies heavily on human-provided public tests, a labor-intensive bottleneck that also encourages overfitting to simplistic examples. DryRUN addresses this by enabling LLMs to self-correct through autonomous input generation and simulated execution. This advances autonomous code generation, making it more practical and robust for real-world scenarios.
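To make the self-correction loop concrete, here is a minimal Python sketch of a DryRUN-style iteration, assuming a generic text-completion callable `llm`; the function name `dryrun_generate`, the prompt wording, and the loop structure are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of a DryRUN-style self-correction loop (not the authors' code).
# `llm` is assumed to be any text-completion callable; prompts are illustrative.

def dryrun_generate(problem: str, llm, max_rounds: int = 3) -> str:
    """Iteratively plan, draft code, self-generate inputs, and simulate execution."""
    plan = llm(f"Devise a step-by-step plan for this problem:\n{problem}")
    code = llm(f"Problem:\n{problem}\nPlan:\n{plan}\nWrite a complete solution.")

    for _ in range(max_rounds):
        # 1. Autonomously generate candidate test inputs (no human-provided public tests).
        inputs = llm(f"Problem:\n{problem}\nPropose a few valid, diverse test inputs.")

        # 2. Simulated execution: the model traces the code on those inputs, without running it.
        trace = llm(
            f"Code:\n{code}\nInputs:\n{inputs}\n"
            "Trace the execution step by step and state the final outputs."
        )

        # 3. Self-check: does the traced behavior satisfy the problem statement?
        verdict = llm(
            f"Problem:\n{problem}\nSimulated trace:\n{trace}\n"
            "Does the code behave correctly? Answer CORRECT or describe the bug."
        )
        if verdict.strip().startswith("CORRECT"):
            return code

        # 4. Repair the code using the simulated trace as feedback (no real execution or public tests).
        code = llm(f"Code:\n{code}\nIssue found:\n{verdict}\nReturn a fixed solution.")

    return code
```

The point the sketch illustrates is that every grounding signal (inputs, traces, verdicts) comes from the model itself, so no human-authored public tests or external execution feedback are required.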
Original Abstract
Multi-agent frameworks are widely used in autonomous code generation and have applications in complex algorithmic problem-solving. Recent work has addressed the challenge of generating functionally correct code by incorporating simulation-driven planning and debugging, where language models trace execution steps to verify logic. However, these approaches depend on human-provided public test cases to ground the debugging and simulation loop. Manually authoring comprehensive input-output examples is a labor-intensive bottleneck in the software development lifecycle. Because ground-truth input-output examples are rarely available prior to implementation in real-world software engineering, this dependency restricts such methods to curated competitive programming benchmarks. Furthermore, we identify that reliance on these public tests induces an "overconfidence gap," causing frameworks to overfit to simplistic examples and fail on hidden evaluations. In contrast, we observe that external sample inputs are not strictly necessary for code generation. We demonstrate that large language models can autonomously generate valid inputs and simulate execution traces to self-correct. Consequently, we develop DryRUN, a framework that eliminates the need for ground-truth samples by allowing the LLM to iteratively plan, autonomously generate its own inputs, and simulate execution, thereby mitigating algorithmic overconfidence. Evaluations on the LiveCodeBench v6 dataset (post-March 2025) demonstrate that DryRUN matches the performance of CodeSIM, a state-of-the-art, public-test-dependent framework, while operating entirely without public test cases or external execution feedback and while reducing output token consumption.