ArXiv TLDR

Foundation Models as Oracles for Refactoring Correctness Detection

2605.02096

Rohit Gheyi, Rian Melo, Jonhnanthan Oliveira, Marcio Ribeiro, Baldoino Fonseca

cs.SE

TLDR

Foundation models can effectively detect refactoring correctness issues in Java programs, complementing traditional rule-based checks.

Key contributions

  • Evaluated FMs zero-shot on 226 real Java refactoring bugs from major IDEs.
  • GPT-5.4 achieved 93.8% accuracy in detecting refactoring correctness issues.
  • Gemini-3.1-Pro-Preview achieved the best overall detection accuracy among all models.
  • FMs provide explanations and operate across refactoring types without specific rules.
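The zero-shot setup above can be sketched as a small oracle loop: build a prompt containing the original and refactored program, ask the model for a verdict plus a one-sentence explanation, and parse the reply. This is a minimal illustration of the general idea, not the paper's actual harness; the function names (`build_prompt`, `judge_refactoring`) and the stubbed `fake_model` are assumptions for the sketch, and a real run would call an FM API instead.

```python
def build_prompt(original: str, refactored: str) -> str:
    """Zero-shot prompt: no examples, no refactoring-specific rules."""
    return (
        "You are a refactoring correctness oracle for Java.\n"
        "Given the original and refactored program, answer CORRECT if the\n"
        "transformation preserves behavior and compiles, otherwise\n"
        "INCORRECT, followed by a one-sentence explanation.\n\n"
        f"Original:\n{original}\n\nRefactored:\n{refactored}\n"
    )

def parse_verdict(reply: str) -> bool:
    """Map the model's free-text reply to a boolean 'is correct' verdict."""
    return reply.strip().upper().startswith("CORRECT")

def judge_refactoring(original: str, refactored: str, call_model):
    """Return (is_correct, explanation) for one candidate refactoring."""
    reply = call_model(build_prompt(original, refactored))
    return parse_verdict(reply), reply

# Stubbed model for illustration only; replace with a real FM API call.
def fake_model(prompt: str) -> str:
    return "INCORRECT: the rename breaks an external caller of m()."

ok, explanation = judge_refactoring(
    "class A { void m() {} }",
    "class A { void n() {} }",
    fake_model,
)
print(ok, explanation)  # → False INCORRECT: the rename breaks an external caller of m().
```

Because the verdict comes with a free-text explanation, the same loop doubles as a lightweight triage aid: flagged transformations can be queued for developer inspection rather than auto-rejected.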

Why it matters

Refactoring tools often introduce bugs, undermining developer trust. This paper shows foundation models can significantly improve refactoring correctness detection. They offer a flexible, rule-agnostic approach to complement existing methods, potentially streamlining development workflows and enhancing code quality.

Original Abstract

Refactoring tools in popular Integrated Development Environments (IDEs) can introduce unintended behavioral changes or compilation errors, a persistent challenge that undermines developer trust in automated transformations. Traditional detection approaches rely on handcrafted preconditions, and static and dynamic analyses, yet remain limited in adaptability and can miss subtle correctness issues. This study examines the potential of foundation models to serve as oracles for detecting refactoring bugs in Java programs. We evaluate zero-shot prompting, without task-specific training, across 226 real refactoring bugs collected over more than a decade from widely used Java IDEs (IntelliJ-IDEA, Eclipse, and NetBeans), spanning 47 refactoring types. Our results indicate that foundation models can be effective for this task, although performance varies across models. In the first-run setting, GPT-OSS-20B achieved 80.5% accuracy, while GPT-5.4 reached 93.8%. We also evaluated other open and proprietary models: Gemma-4-31B achieved the strongest result among open models, and Gemini-3.1-Pro-Preview achieved the best overall result among all evaluated models. Metamorphic testing further shows that model predictions are largely consistent under intended semantics-preserving code variations, suggesting that superficial pattern matching may not fully account for the observed behavior. Beyond detection accuracy, foundation models can provide short explanations that may help support developer inspection, operate across refactoring types without explicitly encoded refactoring-specific rules, and may serve as lightweight triage aids in development workflows. Our findings suggest that foundation models can complement traditional refactoring checks by flagging suspicious transformations for developer inspection.
