ArXiv TLDR

Crash-free Deductive Verifiers

arXiv:2604.19448

Wander Nauta, Marcus Gerhold, Marieke Huisman

cs.SE

TLDR

This paper proposes fuzzing as a practical way to improve the robustness and reliability of complex deductive verifiers, demonstrated with AValAnCHE, a prototype tool integrated with the VerCors verifier.

Key contributions

  • Advocates fuzzing as a practical method to improve the quality and robustness of deductive verifiers.
  • Introduces AValAnCHE, a prototype fuzzing tool integrated with the VerCors verifier.
  • In experiments, AValAnCHE uncovered several issues in VerCors, improving its reliability.
  • Demonstrates that the fuzzing approach also applies to other deductive verifiers.

Why it matters

Deductive verifiers are crucial for software correctness but often lack out-of-the-box reliability. This paper offers a practical fuzzing approach to make these complex tools more robust and trustworthy for a wider user base.
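The general idea of fuzzing a verifier can be sketched as a small mutate-and-run loop. The snippet below is an illustrative sketch only, not AValAnCHE's actual implementation: the token-level `mutate` strategy and the `toy_verifier` stand-in are hypothetical, and a real fuzzer would instead invoke the verifier (e.g. VerCors) as a subprocess on mutated input files and flag crashes or internal errors.

```python
import random


def mutate(program: str, rng: random.Random) -> str:
    """Randomly delete, duplicate, or swap one whitespace-separated token."""
    tokens = program.split()
    if not tokens:
        return program
    i = rng.randrange(len(tokens))
    op = rng.choice(["delete", "duplicate", "swap"])
    if op == "delete":
        del tokens[i]
    elif op == "duplicate":
        tokens.insert(i, tokens[i])
    else:
        j = rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)


def fuzz(verify, seeds, iterations=2000, seed=0):
    """Run mutated seed programs through `verify`; collect crashing inputs.

    `verify` stands in for a deductive verifier front end: any uncaught
    exception on (possibly ill-formed) input counts as a robustness bug.
    """
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(rng.choice(seeds), rng)
        try:
            verify(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes


# Hypothetical stand-in verifier: its "parser" raises an IndexError when a
# requires/ensures keyword is not followed by an expression -- exactly the
# kind of front-end robustness bug fuzzing tends to surface.
def toy_verifier(program: str):
    tokens = program.split()
    for i, tok in enumerate(tokens):
        if tok in ("requires", "ensures"):
            _ = tokens[i + 1]  # IndexError if the keyword ends the input
    return "verified"


seeds = ["requires x > 0 ensures y > 0"]
crashes = fuzz(toy_verifier, seeds)
print(f"crashing inputs found: {len(crashes)}")
```

Each crashing input is kept together with its exception, mirroring how a verifier fuzzer would save failing programs for later triage and bug reports.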

Original Abstract

As deductive verifiers mature, their potential user base is growing from the initial core developers to other users. To convince external users of the suitability of verifiers, these tools must run reliably out of the box, give meaningful error messages and display correct results. Yet deductive verifiers are large and complex software systems and their own full verification is often out of reach. We therefore need complementary means to provide such guarantees. This paper advocates the use of fuzzing as a practical way to improve the quality and robustness of deductive verifiers. We outline how fuzz testing can be applied to deductive verifiers, and demonstrate the idea with the prototype tool AValAnCHE, which is integrated with the VerCors verifier. We report on our experiments in which AValAnCHE uncovered several issues in VerCors and demonstrate that the approach also works for other deductive verifiers.
