ArXiv TLDR

Neurosymbolic Repo-level Code Localization

arXiv: 2604.16021

Xiufeng Xu, Xiufeng Wu, Zejun Zhang, Yi Li

cs.SE, cs.AI

TLDR

LogicLoc, a neurosymbolic framework, combines LLMs with Datalog to overcome the 'Keyword Shortcut' bias in code localization, enabling genuine structural reasoning.

Key contributions

  • Identified 'Keyword Shortcut' bias in code localization benchmarks, where models rely on superficial lexical matching.
  • Introduced KA-LogicQuery, a diagnostic benchmark requiring structural reasoning without any naming hints.
  • Proposed LogicLoc, a neurosymbolic framework combining LLMs with Datalog for precise code localization.
  • LogicLoc significantly outperforms SOTA on KA-LogicQuery with lower token consumption and faster execution.

Why it matters

This work addresses a critical flaw in current code localization models: they fail at genuine structural reasoning. By exposing the 'Keyword Shortcut' and introducing LogicLoc, it paves the way for autonomous software engineering systems that are more robust, verifiable, and efficient.

Original Abstract

Code localization is a cornerstone of autonomous software engineering. Recent advancements have achieved impressive performance on real-world issue benchmarks. However, we identify a critical yet overlooked bias: these benchmarks are saturated with keyword references (e.g. file paths, function names), encouraging models to rely on superficial lexical matching rather than genuine structural reasoning. We term this phenomenon the Keyword Shortcut. To address this, we formalize the challenge of Keyword-Agnostic Logical Code Localization (KA-LCL) and introduce KA-LogicQuery, a diagnostic benchmark requiring structural reasoning without any naming hints. Our evaluation reveals a catastrophic performance drop of state-of-the-art approaches on KA-LogicQuery, exposing their lack of deterministic reasoning capabilities. We propose LogicLoc, a novel agentic framework that combines large language models with the rigorous logical reasoning of Datalog for precise localization. LogicLoc extracts program facts from the codebase and leverages an LLM to synthesize Datalog programs, with parser-gated validation and mutation-based intermediate-rule diagnostic feedback to ensure correctness and efficiency. The validated programs are executed by a high-performance inference engine, enabling accurate and verifiable localization in a fully automated, closed-loop workflow. Experimental results demonstrate that LogicLoc significantly outperforms SOTA methods on KA-LogicQuery while maintaining competitive performance on popular issue-driven benchmarks. Notably, LogicLoc attains superior performance with significantly lower token consumption and faster execution by offloading structural traversal to a deterministic engine, reducing the overhead of iterative LLM inference.
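To make the symbolic half of this workflow concrete, here is a minimal sketch in Python of the kind of deterministic reasoning a Datalog engine performs over extracted program facts. The fact schema (`calls(caller, callee)`), the function names, and the query are all hypothetical illustrations, not the paper's actual rules; in LogicLoc the LLM synthesizes such rules and a high-performance engine evaluates them.

```python
# Hypothetical extracted program facts: calls(caller, callee)
calls = {
    ("main", "parse"),
    ("parse", "tokenize"),
    ("tokenize", "read_char"),
    ("main", "report"),
}

def transitive_calls(calls):
    """Naive fixpoint evaluation of the Datalog rules:
         reachable(X, Z) :- calls(X, Z).
         reachable(X, Z) :- calls(X, Y), reachable(Y, Z).
    """
    reachable = set(calls)
    while True:
        new = {(x, z)
               for (x, y) in calls
               for (y2, z) in reachable
               if y == y2}
        if new <= reachable:       # no new facts derived: fixpoint reached
            return reachable
        reachable |= new

# A keyword-agnostic "localization query": every function transitively
# invoked from main, derived purely from structure, not from name matching.
targets = sorted(z for (x, z) in transitive_calls(calls) if x == "main")
print(targets)  # ['parse', 'read_char', 'report', 'tokenize']
```

The point of offloading this traversal to a deterministic engine, as the paper describes, is that the answer is exact and verifiable, and the LLM never has to iterate through the codebase itself.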
