ArXiv TLDR

Natural Language based Specification and Verification

arXiv: 2605.11315

Zhaorui Li, Chengyu Song

cs.SE · cs.AI · cs.CR

TLDR

This paper explores using LLMs both to generate natural language specifications and to verify code implementations against them compositionally, reporting promising preliminary results.

Key contributions

  • Uses LLMs to generate specifications written in natural language.
  • Uses LLMs to verify implementations compositionally against those natural language specifications.
  • Aims to prevent LLMs from producing vulnerable code in the first place, without requiring specifications in a rigid formal language.
  • Reports preliminary results suggesting the approach is promising.
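The paper itself does not include code, but the workflow the contributions describe can be sketched as a two-step pipeline: first ask a model for a natural language spec of each function, then verify each implementation against its own spec while assuming (rather than re-deriving) the specs of its callees. The sketch below is a minimal illustration under assumptions; `ask_llm` is a hypothetical stand-in for a real LLM client, stubbed here so the example runs offline.

```python
# Minimal sketch of natural-language specification + compositional
# verification. `ask_llm` is a hypothetical placeholder, NOT the
# paper's actual interface; swap in a real chat-model client.
from dataclasses import dataclass, field


def ask_llm(prompt: str) -> str:
    """Stubbed LLM call for illustration; a real model returns free-form text."""
    if "spec:" in prompt.lower():
        return "VERIFIED"  # pretend the model judged the code correct
    return "SPEC: returns the sum of its two arguments"


@dataclass
class Function:
    name: str
    source: str
    callees: list = field(default_factory=list)


def generate_spec(fn: Function) -> str:
    # Step 1: have the LLM write a natural-language specification.
    prompt = f"Write a natural-language specification for:\n{fn.source}"
    return ask_llm(prompt)


def verify(fn: Function, specs: dict) -> bool:
    # Step 2: compositional check -- verify this implementation against
    # its own spec, taking the callees' specs as assumptions.
    callee_specs = "\n".join(f"{c}: {specs[c]}" for c in fn.callees)
    prompt = (
        f"Spec: {specs[fn.name]}\n"
        f"Assume these callee specifications hold:\n{callee_specs}\n"
        f"Does this implementation satisfy its spec?\n{fn.source}"
    )
    return ask_llm(prompt).strip().startswith("VERIFIED")


fn = Function("add", "def add(a, b):\n    return a + b")
specs = {fn.name: generate_spec(fn)}
print(verify(fn, specs))  # → True (with the stubbed model)
```

Because each function is checked only against its own spec plus its callees' specs, verification effort stays local to one function at a time, which is what makes the compositional framing attractive for large codebases.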

Why it matters

This paper addresses the critical challenge of preventing large language models from generating vulnerable code. By enabling specification and verification in natural language, it makes formal verification more accessible and practical for LLM-generated systems. This approach could significantly improve the security of AI-produced software.

Original Abstract

Recent frontier large language models (LLMs) have shown strong performance in identifying security vulnerabilities in large, mature open-source systems. As LLM-generated code becomes increasingly common, a natural goal is to prevent such models from producing vulnerable implementations in the first place. Formal verification offers a principled route to this objective, but existing verification pipelines typically require specifications written in rigid formal languages. Prior work has explored using LLMs to synthesize such specifications, with limited success. In this paper, we investigate a different approach: using LLMs both to generate specifications and to verify implementations compositionally when the specifications are expressed in natural language. Our preliminary results suggest that this approach is promising.
