ArXiv TLDR

A Metamorphic Testing Approach to Diagnosing Memorization in LLM-Based Program Repair

2604.21579

Milan De Koning, Ali Asgari, Pouria Derakhshanfar, Annibale Panichella

cs.SE, cs.AI

TLDR

This paper uses metamorphic testing and negative log-likelihood to diagnose memorization in LLM-based program repair, finding significant performance drops on transformed code.

Key contributions

  • Combines metamorphic testing (MT) with negative log-likelihood (NLL) to diagnose memorization in LLM program repair.
  • Creates variant benchmarks from Defects4J and GitBug-Java using semantics-preserving transformations.
  • Finds state-of-the-art LLMs suffer substantial success-rate drops on transformed code, ranging from 4.1% (GPT-4o) to 15.98% (Llama-3.1).
  • Shows the degradation strongly correlates with NLL on the original benchmarks, suggesting models succeed more often on instances they have likely memorized.
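The core idea of the metamorphic setup is that a semantics-preserving transformation should not change whether a model can repair the bug. As a minimal illustration (the paper's benchmarks are Java; this is a hedged Python sketch with invented names, not the authors' tooling), consistent variable renaming is one such transformation, and its metamorphic relation can be checked by executing both versions:

```python
import ast
import builtins

class RenameVariables(ast.NodeTransformer):
    """Consistently rename user-defined identifiers to fresh names.
    Builtins such as `range` are left untouched, so behavior is preserved."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id not in vars(builtins):
            node.id = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
        return node

src = "total = 0\nfor x in range(5):\n    total += x\n"
renamer = RenameVariables()
transformed = ast.unparse(renamer.visit(ast.parse(src)))

# Metamorphic relation: original and transformed code compute the same result.
env_a, env_b = {}, {}
exec(src, env_a)
exec(transformed, env_b)
assert env_a["total"] == env_b[renamer.mapping["total"]]
```

A model that truly understands the buggy program should repair the renamed variant about as often as the original; a large gap points to memorization of the original text.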

Why it matters

Data leakage inflates LLM program repair performance, obscuring true capabilities. This paper offers a robust method to diagnose and mitigate memorization, enabling more reliable evaluations. It helps LLM-based APR evaluations reward genuine repair ability rather than memorized solutions.

Original Abstract

LLM-based automated program repair (APR) techniques have shown promising results in reducing debugging costs. However, prior results can be affected by data leakage: large language models (LLMs) may memorize bug fixes when evaluation benchmarks overlap with their pretraining data, leading to inflated performance estimates. In this paper, we investigate whether we can better reveal data leakage by combining metamorphic testing (MT) with negative log-likelihood (NLL), which has been used in prior work as a proxy for memorization. We construct variant benchmarks by applying semantics-preserving transformations to two widely used datasets, Defects4J and GitBug-Java. Using these benchmarks, we evaluate the repair success rates of seven LLMs on both original and transformed versions, and analyze the relationship between performance degradation and NLL. Our results show that all evaluated state-of-the-art LLMs exhibit substantial drops in patch generation success rates on transformed benchmarks, ranging from -4.1% for GPT-4o to -15.98% for Llama-3.1. Furthermore, we find that this degradation strongly correlates with NLL on the original benchmarks, suggesting that models perform better on instances they are more likely to have memorized. These findings show that combining MT with NLL provides stronger and more reliable evidence of data leakage, while metamorphic testing alone can help mitigate its effects in LLM-based APR evaluations.
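The memorization proxy in the abstract, negative log-likelihood, is just the average of minus the log of each token's conditional probability under the model. The sketch below uses toy hand-picked probabilities (an assumption for illustration; a real study would take them from a causal LM's output distribution) to show why memorized sequences yield lower NLL:

```python
import math

def mean_nll(token_probs):
    """Mean negative log-likelihood over a sequence, given per-token
    probabilities p(token_i | preceding tokens) from a causal LM."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A fix seen during pretraining: the model assigns high probability
# to each token, so its NLL is low.
memorized = [0.9, 0.95, 0.85, 0.9]
# A transformed / unseen fix: lower per-token probabilities, higher NLL.
unseen = [0.3, 0.5, 0.4, 0.35]

assert mean_nll(memorized) < mean_nll(unseen)
```

The paper's finding is that repair success degrades most on exactly the instances whose original versions have low NLL, which is what ties the performance drop to memorization rather than to the transformations being intrinsically harder.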

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.