Evaluating LLM-Based 0-to-1 Software Generation in End-to-End CLI Tool Scenarios
Ruida Hu, Xinchen Wang, Chao Peng, Cuiyun Gao, David Lo
TLDR
This paper introduces CLI-Tool-Bench, a new benchmark for evaluating LLM-based 0-to-1 software generation, revealing that current models struggle with end-to-end CLI tool creation.
Key contributions
- Identifies limitations in existing benchmarks for 0-to-1 software generation, which lack repository-structure planning and end-to-end behavioral validation.
- Introduces CLI-Tool-Bench, a structure-agnostic benchmark for ground-up CLI tool generation.
- Features 100 diverse real-world repositories and uses black-box differential testing in sandboxes.
- Reveals state-of-the-art LLMs achieve under 43% success, often generating monolithic code.
Why it matters
This paper addresses a critical gap in evaluating LLM agents' ability to build complete software from scratch. By introducing CLI-Tool-Bench, it provides a robust, real-world benchmark for 0-to-1 generation. The findings highlight significant challenges for current LLMs, guiding future research in intent-driven development.
Original Abstract
Large Language Models (LLMs) are driving a shift towards intent-driven development, where agents build complete software from scratch. However, existing benchmarks fail to assess this 0-to-1 generation capability due to two limitations: reliance on predefined scaffolds that ignore repository structure planning, and rigid white-box unit testing that lacks end-to-end behavioral validation. To bridge this gap, we introduce CLI-Tool-Bench, a structure-agnostic benchmark for evaluating the ground-up generation of Command-Line Interface (CLI) tools. It features 100 diverse real-world repositories evaluated via a black-box differential testing framework. Agent-generated software is executed in sandboxes, comparing system side effects and terminal outputs against human-written oracles using multi-tiered equivalence metrics. Evaluating seven state-of-the-art LLMs, we reveal that top models achieve under 43% success, highlighting the ongoing challenge of 0-to-1 generation. Furthermore, higher token consumption does not guarantee better performance, and agents tend to generate monolithic code.
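The abstract's evaluation pipeline — run agent-generated software in a sandbox, then compare its terminal output and file-system side effects against a human-written oracle — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual harness: the function names (`run_in_sandbox`, `differential_test`), the SHA-256 file snapshot, and the two equivalence tiers shown are assumptions for demonstration.

```python
import hashlib
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(cmd, input_files):
    """Run a CLI command in a fresh temp dir; return (stdout, file-state snapshot)."""
    with tempfile.TemporaryDirectory() as sandbox:
        root = Path(sandbox)
        for name, content in input_files.items():
            (root / name).write_text(content)
        result = subprocess.run(cmd, cwd=root, capture_output=True,
                                text=True, timeout=30)
        # Snapshot side effects: hash every file left behind in the sandbox.
        snapshot = {
            p.relative_to(root).as_posix(): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()
        }
        return result.stdout, snapshot

def differential_test(candidate_cmd, oracle_cmd, input_files):
    """Black-box comparison of a generated CLI tool against an oracle,
    with two example equivalence tiers: exact stdout and file side effects."""
    cand_out, cand_fs = run_in_sandbox(candidate_cmd, input_files)
    orac_out, orac_fs = run_in_sandbox(oracle_cmd, input_files)
    return {
        "stdout_match": cand_out == orac_out,
        "side_effects_match": cand_fs == orac_fs,
    }
```

Because both tools run against identical inputs in isolated directories, the comparison needs no knowledge of either implementation's internals, which is the "black-box" and "structure-agnostic" property the benchmark relies on; the paper's multi-tiered metrics would add softer tiers (e.g. normalized or partial output matches) beyond the exact comparisons shown here.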