CodeSpecBench: Benchmarking LLMs for Executable Behavioral Specification Generation
Zaoyu Chen, Jianbo Dai, Boyu Zhu, Jingdong Wang, Huiming Wang, et al.
TLDR
CodeSpecBench is a new benchmark for evaluating LLMs' ability to generate executable behavioral specifications; performance degrades sharply on repository-level tasks, where the best model reaches only a 20.2% pass rate.
Key contributions
- Introduces CodeSpecBench, a benchmark for executable behavioral specification generation, with specifications encoded as executable Python functions (see the sketch after this list).
- Evaluates 15 state-of-the-art LLMs under an execution-based protocol on both function-level and repository-level tasks.
- Reveals a sharp performance drop on repository-level tasks, where the best LLM attains only a 20.2% pass rate.
- Shows that specification generation is substantially harder than code generation, suggesting that strong coding performance does not reflect deep understanding of intended program semantics.
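
To make the precondition/postcondition format concrete, here is a minimal sketch of what an executable behavioral specification could look like as plain Python functions. The names `precondition` and `postcondition` and the max-of-list example are illustrative assumptions, not taken from the benchmark itself.

```python
# Hypothetical example: an executable behavioral specification for a
# function that should return the maximum of a non-empty integer list.
# The names `precondition`/`postcondition` are illustrative, not the
# benchmark's actual schema.

def precondition(xs: list[int]) -> bool:
    # Valid inputs: a non-empty list of integers.
    return isinstance(xs, list) and len(xs) > 0 and all(
        isinstance(x, int) for x in xs
    )

def postcondition(xs: list[int], result: int) -> bool:
    # The result must be an element of xs that is >= every element.
    return result in xs and all(result >= x for x in xs)

# Execution-based check of a candidate implementation (here, built-in max):
xs = [3, 1, 4]
assert precondition(xs)
assert postcondition(xs, max(xs))
```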
Why it matters
LLMs can generate code, but this paper shows they struggle to produce precise behavioral specifications, especially for repository-scale projects. This exposes a critical gap in their grasp of program semantics: current coding ability does not equate to deep comprehension, and it motivates research into genuine understanding of program behavior.
Original Abstract
Large language models (LLMs) can generate code from natural language, but the extent to which they capture intended program behavior remains unclear. Executable behavioral specifications, defined via preconditions and postconditions, provide a concrete means to assess such understanding. However, existing work on specification generation is constrained in evaluation methodology, task settings, and specification expressiveness. We introduce CodeSpecBench, a benchmark for executable behavioral specification generation under an execution-based evaluation protocol. CodeSpecBench supports both function-level and repository-level tasks and encodes specifications as executable Python functions. Constructed from diverse real-world codebases, it enables a realistic assessment of both correctness (accepting valid behaviors) and completeness (rejecting invalid behaviors). Evaluating 15 state-of-the-art LLMs on CodeSpecBench, we observe a sharp performance degradation on repository-level tasks, where the best model attains only a 20.2% pass rate. We further find that specification generation is substantially more challenging than code generation, indicating that strong coding performance does not necessarily reflect deep understanding of intended program semantics. Our data and code are available at https://github.com/SparksofAGI/CodeSpecBench.
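
As a rough illustration of what an execution-based protocol along these lines might look like, the sketch below scores a generated specification for correctness (accepting every behavior of a reference implementation) and completeness (rejecting invalid behaviors, here produced by hypothetical mutant implementations). The harness shape, `evaluate_spec`, and the mutant-based completeness check are assumptions for illustration, not the benchmark's actual code.

```python
# Minimal sketch of an execution-based spec evaluation (assumptions:
# `spec(inp, out)` is the generated postcondition; `mutants` are
# deliberately wrong implementations supplying invalid behaviors).

def evaluate_spec(spec, reference, mutants, inputs) -> bool:
    # Correctness: the spec accepts every valid behavior of the reference.
    correct = all(spec(x, reference(x)) for x in inputs)
    # Completeness: the spec rejects at least one invalid behavior per mutant.
    complete = all(
        any(not spec(x, m(x)) for x in inputs if m(x) != reference(x))
        for m in mutants
    )
    return correct and complete

# Toy usage: a spec for "return the maximum of a non-empty list".
spec = lambda xs, out: out in xs and all(out >= v for v in xs)
print(evaluate_spec(spec, max, [min, sum], [[3, 1, 4], [2, 2], [5]]))  # True
```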