ArXiv TLDR

Teaching LLMs Program Semantics via Symbolic Execution Traces

2605.06184

Jonas Bayer, Stefan Zetzsche, Olivier Bouissou, Remi Delmas, Michael Tautschnig + 1 more

cs.SE · cs.LG · cs.PL

TLDR

This paper improves LLMs' understanding of program semantics by training them on symbolic execution traces, boosting violation detection by over 17 percentage points.

Key contributions

  • Introduced a 500-task C verification benchmark, built on SV-COMP 2025, that evaluates LLMs across five property types.
  • Identified that LLMs reliably confirm that properties hold but struggle to detect violations, with accuracy degrading sharply on longer programs.
  • Continued pretraining of Qwen3-8B on ~3,000 symbolic execution bug traces, improving violation detection by over 17 percentage points when combined with chain-of-thought reasoning.
  • Demonstrated that trace training and CoT reasoning are superadditive: neither alone yields meaningful gains, but together they let the 8B model outperform the 4× larger Qwen3-32B on violation detection.

Why it matters

LLMs often struggle to detect subtle property violations in programs, a critical gap in code reasoning. This paper substantially improves LLM bug detection by training models on symbolic execution traces, making them more reliable for formal verification tasks, with gains that generalize across property types, including ones the training traces do not target.

Original Abstract

We introduce an evaluation framework of 500 C verification tasks across five property types (memory safety, overflow, termination, reachability, data races) built on SV-COMP 2025, and evaluate 14 models across six families. We find that high overall accuracy masks a critical weakness: while most models reliably confirm properties hold, violation detection varies widely and degrades sharply with program length. To close this gap, we train on formal verification artifacts: running the Soteria symbolic execution engine on generic open-source C code and using the resulting traces for continued pretraining of Qwen3-8B. Just ~3,000 bug traces combined with chain-of-thought reasoning at inference time improve violation detection by over 17 percentage points, producing one of the most balanced accuracy profiles among evaluated models. On violation detection, the trained 8B model outperforms the 4× larger Qwen3-32B without thinking and approaches it in overall accuracy. The interaction between trace training and chain-of-thought is superadditive: neither alone provides meaningful gains, but their combination does. Improvements transfer across all five property types, including ones the training traces do not target. Our 28 configurations confirm the gains stem from trace semantics, not code volume, and that trace curation and format matter.
