ArXiv TLDR

RealBench: A Repo-Level Code Generation Benchmark Aligned with Real-World Software Development Practices

2604.22659

Jia Li, Hongyi Deng, Yiran Zhang, Kechi Zhang, Tianqi Shao + 7 more

cs.SE

TLDR

RealBench is a new benchmark for repo-level code generation, using structured designs (UML) to better align LLM evaluation with real-world software development.

Key contributions

  • Introduces RealBench, a repo-level code generation benchmark reflecting real-world software development practices.
  • Each benchmark example includes natural language requirements and UML diagrams for system design (see the sketch after this list).
  • LLMs perform much worse on repo-level code generation, and the performance gaps among models are significant.
  • LLMs are good at locating and creating the modules defined in UML diagrams, but the generated code is often low quality, with grammar and logic errors.
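To make the example format concrete, here is a minimal sketch of how a single RealBench-style task could be represented in Python. The field names, schema, and the PlantUML snippet are illustrative assumptions, not the benchmark's actual data format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a RealBench-style task record.
# Field names and the PlantUML fragment are assumptions for illustration.
@dataclass
class RepoTask:
    repo_name: str                 # target repository to generate
    requirements: str              # natural language requirements
    uml_design: str                # structured system design (e.g., a PlantUML class diagram)
    reference_tests: List[str] = field(default_factory=list)  # tests used to score the generated repo

example = RepoTask(
    repo_name="todo-service",
    requirements="Implement a small task-tracking service with persistent storage.",
    uml_design="""
    @startuml
    class Task {
      +id: int
      +title: str
      +done: bool
    }
    class TaskStore {
      +add(task: Task)
      +list(): List<Task>
    }
    TaskStore "1" o-- "*" Task
    @enduml
    """,
    reference_tests=["tests/test_task_store.py"],
)
```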

Why it matters

Current LLM code generation benchmarks don't reflect industry practices, leading to inaccurate performance assessments. RealBench bridges this gap by evaluating LLMs with structured designs, providing crucial insights into their real-world capabilities and limitations for complex software tasks.

Original Abstract

Writing code requires significant time and effort in software development. To automate this process, researchers have made substantial progress using Large Language Models (LLMs) for code generation. Many benchmarks like HumanEval and EvoCodeBench have been created to evaluate LLMs by requiring them to generate code from natural language requirements. However, in enterprise applications and team development, developers typically write code based on structured designs or specifications rather than raw natural language descriptions. This gap between existing benchmarks and real industry development practices means that current benchmark scores may not accurately reflect how much code generation can help automate software development tasks. To address this gap, we propose RealBench, a repository-level code generation benchmark aligned with real-world industry software development practices. Each example includes both natural language requirements and UML diagrams as system design, matching how developers typically receive specifications. Based on the constructed benchmark, we conduct a systematic evaluation of advanced LLMs' code generation capabilities when provided with structured system designs. The experimental results reveal key insights into current LLMs' capabilities for repo-level code generation aligned with real-world software development practices. First, we notice that for repo-level code generation, LLMs show much worse performance and there are significant performance gaps among LLMs. Second, LLMs are good at finding and creating modules defined in UML diagrams, but the quality of generated modules is often poor due to grammar and logic errors. Third, generating the entire repository at once is the best strategy on smaller repositories, while for complex repositories the module-by-module strategy works better than other strategies.
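The abstract's third finding concerns generation strategies: producing the whole repository in one pass works best for small repositories, while module-by-module generation wins on complex ones. The sketch below illustrates what a module-by-module loop could look like; `call_llm`, the prompt format, and the module dictionary are assumptions for illustration, not the paper's actual setup.

```python
from typing import Dict

def call_llm(prompt: str) -> str:
    # Placeholder for a real model/API call; returns generated source code.
    return f"# generated code (prompt was {len(prompt)} characters)"

def generate_module_by_module(requirements: str, modules: Dict[str, str]) -> Dict[str, str]:
    """Generate one module at a time, feeding previously generated modules
    back into the prompt so later modules can build on earlier interfaces.
    `modules` maps a module name to its UML/spec fragment."""
    repo: Dict[str, str] = {}
    for name, uml in modules.items():
        context = "\n\n".join(f"# {n}\n{code}" for n, code in repo.items())
        prompt = (
            f"Requirements:\n{requirements}\n\n"
            f"Already generated modules:\n{context or '(none)'}\n\n"
            f"UML specification for the next module ({name}):\n{uml}\n\n"
            "Write the source file implementing this module."
        )
        repo[name] = call_llm(prompt)
    return repo
```

By contrast, the whole-repository strategy would build a single prompt from the requirements plus all UML fragments and ask the model to emit every file at once, which per the abstract is the stronger option for smaller repositories.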
