Beyond Code Reasoning: A Specification-Anchored Audit Framework for Expert-Augmented Security Verification
Masato Kamba, Hirotake Murakami, Akiyoshi Sannai
TLDR
SPECA is a framework that audits security-critical software against typed properties derived from natural-language specifications, enabling it to detect spec-dependent bugs that code-centric tools miss.
Key contributions
- Introduces SPECA, a framework for security auditing using properties derived from natural-language specifications.
- Enables detection of spec-dependent vulnerabilities and controlled cross-implementation comparisons.
- Recovers all in-scope vulnerabilities and discovers new developer-confirmed bugs on an Ethereum audit contest, and matches the best published precision (88.9%) on a C/C++ benchmark.
- Identifies common root causes for false positives, guiding future improvements in auditing.
Why it matters
This paper introduces an approach to security auditing that moves beyond code-centric analysis, which matters for specification-governed systems such as protocol stacks and cryptographic libraries. By anchoring audits to natural-language specifications, SPECA detects vulnerabilities that arise from what the specification requires rather than how the code is written, and it makes false positives diagnosable by tracing them to recurring, pipeline-phase-specific root causes.
Original Abstract
Security-critical software is routinely audited by tools that reason about vulnerabilities as repository-local code patterns. Yet specification-governed systems -- protocol stacks, consensus implementations, cryptographic libraries -- are constrained by invariants and correctness conditions defined in natural-language specifications. When a vulnerability arises from what the specification requires rather than how code is written, code-level approaches lack the representational vocabulary to detect it, and their false positives resist systematic diagnosis. We present SPECA, a specification-anchored security audit framework that derives explicit, typed security properties from natural-language specifications and audits implementations through structured proof-attempt reasoning grounded in each property. The framework yields three capabilities absent from code-driven auditing: spec-dependent detections, controlled cross-implementation comparison under a shared property vocabulary, and false positives that decompose into interpretable, pipeline-phase-traceable root causes. On the Sherlock Ethereum Fusaka Audit Contest (366 submissions, 10 implementations), SPECA recovers all 15 in-scope vulnerabilities and independently discovers 4 bugs confirmed by developer fix commits. On the RepoAudit C/C++ benchmark (15 projects), SPECA matches the best published precision (88.9%) while surfacing 12 candidate bugs beyond the established ground truth, two confirmed by upstream maintainers. A multi-model analysis reveals that more capable models audit more faithfully within property scope, shifting the detection bottleneck from model reasoning to property generation quality. All false positives trace to three recurring root causes -- trust boundary misunderstanding, code reading errors, and specification misinterpretation -- each yielding actionable improvement targets.
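The pipeline the abstract describes (typed properties extracted from a spec, then a per-property proof-attempt audit of each implementation) can be illustrated with a minimal sketch. The paper does not publish SPECA's data model or API, so every name below (`SecurityProperty`, `Verdict`, `audit_property`, the example spec clause) is a hypothetical stand-in, not the authors' implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    HOLDS = "holds"               # proof attempt succeeded
    VIOLATED = "violated"         # counterexample found -> candidate bug
    INCONCLUSIVE = "inconclusive" # insufficient evidence either way

@dataclass(frozen=True)
class SecurityProperty:
    """A typed, spec-derived property (hypothetical schema)."""
    prop_id: str      # stable identifier shared across implementations
    spec_source: str  # natural-language spec clause it was derived from
    kind: str         # e.g. "invariant", "precondition", "postcondition"
    statement: str    # the property in checkable form

def audit_property(prop: SecurityProperty, code_facts: dict) -> Verdict:
    """Toy proof attempt: look up whether the implementation's extracted
    facts discharge the property. SPECA instead performs structured
    LLM-based proof-attempt reasoning grounded in each property."""
    fact = code_facts.get(prop.prop_id)
    if fact is None:
        return Verdict.INCONCLUSIVE
    return Verdict.HOLDS if fact else Verdict.VIOLATED

# One shared property audited against two implementations: the common
# property vocabulary is what enables cross-implementation comparison.
prop = SecurityProperty(
    prop_id="P1",
    spec_source="hypothetical spec clause on refund limits",
    kind="invariant",
    statement="gas refund never exceeds the allowed fraction of gas used",
)
impl_a_facts = {"P1": True}   # this implementation satisfies the property
impl_b_facts = {"P1": False}  # this one violates it -> candidate bug
print(audit_property(prop, impl_a_facts).value)  # holds
print(audit_property(prop, impl_b_facts).value)  # violated
```

Because every implementation is audited against the same property IDs, a `VIOLATED` verdict in one implementation but not another immediately localizes a spec-dependent divergence, which is the controlled cross-implementation comparison the abstract highlights.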