Less Is More: Measuring How LLM Involvement Affects Chatbot Accuracy in Static Analysis
TLDR
This paper compares three LLM architectures for static analysis query generation, finding that a structured intermediate representation significantly improves accuracy.
Key contributions
- Compared three LLM architectures for natural language to static analysis query translation.
- Evaluated direct generation, schema-constrained JSON IR, and tool-augmented agentic generation.
- Found schema-constrained JSON IR achieved the highest result match rates, outperforming direct generation by 15–25 percentage points on large models.
- Showed structured intermediates benefit large models most, surpassing the agentic approach while consuming roughly 8× fewer tokens; for small models, schema compliance becomes the bottleneck.
Why it matters
LLMs are increasingly used to make static analysis tools accessible, but how much to delegate to the model has rarely been treated as a design variable. This paper shows that constraining the LLM to a structured intermediate representation, and leaving query construction to deterministic code, significantly boosts both accuracy and efficiency — a useful design principle for LLM tools in formal domains.
Original Abstract
Large language models are increasingly used to make static analysis tools accessible through natural language, yet existing systems differ in how much they delegate to the LLM without treating the degree of delegation as an independent variable. We compare three architectures along a spectrum of LLM involvement for translating natural language to Joern's query language CPGQL: direct query generation (Approach 1), generation of a schema-constrained JSON intermediate representation (Approach 2), and tool-augmented agentic generation (Approach 3). These are evaluated on a benchmark of 20 code analysis tasks across three complexity tiers, using four open-weight models in a 2×2 design (two model families × two scales), each with three repetitions. The structured intermediate representation (Approach 2) achieves the highest result match rates, outperforming direct generation by 15–25 percentage points on large models and surpassing the agentic approach despite the latter consuming 8× more tokens. The benefit of structured intermediates is most pronounced for large models; for small models, schema compliance becomes the bottleneck. These findings suggest that in formally structured domains, constraining the LLM's output to a well-typed intermediate representation and delegating query construction to deterministic code yields better results than either unconstrained generation or iterative tool use.
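To make the intermediate-representation idea concrete, here is a minimal sketch of what "schema-constrained JSON IR plus deterministic query construction" could look like. The IR field names (`entity`, `filters`, `output`) and the compiler are illustrative assumptions, not the paper's actual schema; only the CPGQL target syntax (`cpg.call.name(...)`, the `.l` execution directive) follows Joern's documented query style.

```python
import json

# Hypothetical schema-constrained IR that an LLM would be asked to emit.
# The schema (entity / filters / output) is invented for illustration.
ir = json.loads("""
{
  "entity": "call",
  "filters": [{"field": "name", "op": "exact", "value": "strcpy"}],
  "output": "location"
}
""")

def ir_to_cpgql(ir: dict) -> str:
    """Deterministically compile the JSON IR into a CPGQL query string.

    Because this step is plain code, the LLM never has to produce
    CPGQL syntax itself -- it only fills in a well-typed JSON object.
    """
    query = f"cpg.{ir['entity']}"
    for f in ir["filters"]:
        if f["op"] == "exact":
            query += f'.{f["field"]}("{f["value"]}")'
        elif f["op"] == "regex":
            query += f'.{f["field"]}(".*{f["value"]}.*")'
        else:
            raise ValueError(f"unsupported filter op: {f['op']}")
    return query + f".{ir['output']}.l"

print(ir_to_cpgql(ir))  # cpg.call.name("strcpy").location.l
```

The design point the paper argues for is visible here: the model's output surface shrinks to a JSON object that can be schema-validated, while all syntactic correctness of the final query is guaranteed by the deterministic compiler.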