ArXiv TLDR

Learned or Memorized? Quantifying Memorization Advantage in Code LLMs

2604.13997

Djiré Albérick Euraste, Kaboré Abdoul Kader, Jordan Samhi, Earl T. Barr, Jacques Klein, et al.

cs.SE

TLDR

This paper introduces a perturbation method to quantify memorization in code LLMs, finding memorization risk is highly task- and model-dependent.

Key contributions

  • Introduces a perturbation-based method to quantify "memorization advantage" — the performance gap between likely seen and unseen inputs — in code LLMs.
  • Evaluates 8 open-source code LLMs on 19 benchmarks across 4 task families.
  • Finds memorization sensitivity varies widely across models and tasks (e.g., StarCoder high, QwenCoder low).
  • Shows CVEFixes and Defects4J exhibit low memorization advantage, suggesting models rely on learned generalization rather than leakage.

Why it matters

This work provides a crucial method to quantify memorization in code LLMs, addressing data leakage and transparency concerns. It reveals that memorization risk is highly dependent on the specific model and task, challenging assumptions about certain benchmarks. These findings highlight the urgent need for improved evaluation protocols, especially for security-critical applications.

Original Abstract

The lack of transparency about code datasets used to train large language models (LLMs) makes it difficult to detect, evaluate, and mitigate data leakage. We present a perturbation-based method to quantify memorization advantage in code LLMs, defined as the performance gap between likely seen and unseen inputs. We evaluate 8 open-source code LLMs on 19 benchmarks across four task families: code generation, code understanding, vulnerability detection, and bug fixing. Sensitivity patterns vary widely across models and tasks. For example, StarCoder reaches high sensitivity on some benchmarks (up to 0.8), while QwenCoder remains lower (mostly below 0.4), suggesting differences in generalization behavior. Task categories also differ: code summarization tends to show low sensitivity, whereas test generation is substantially higher. We then analyze two widely discussed benchmarks, CVEFixes and Defects4J, often suspected of leakage. Contrary to common concerns, both show low memorization advantage across models: CVEFixes remains below 0.1, and Defects4J is lower than other program repair benchmarks. These results suggest that, for these datasets, models may rely more on learned generalization than direct memorization. Overall, our findings provide evidence that memorization risk is highly task- and model-dependent, and highlight the need for stronger evaluation protocols, especially in security-focused settings.
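The abstract defines memorization advantage as the performance gap between likely seen (original) and unseen (perturbed) inputs. A minimal sketch of that idea is below; the function names, the regex-based identifier renaming, and the use of a plain mean gap are illustrative assumptions for this digest, not the authors' actual implementation (which would use proper parsing and the paper's own sensitivity metric).

```python
import re
from statistics import mean

def perturb_identifiers(code: str, mapping: dict[str, str]) -> str:
    """Apply a semantics-preserving rename to identifiers in a code
    snippet, producing a 'likely unseen' variant of a benchmark input.
    (Illustrative only: a real tool would rename via a parser/AST,
    not whole-word regex substitution.)"""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def memorization_advantage(scores_original: list[float],
                           scores_perturbed: list[float]) -> float:
    """Gap between a model's mean score on original (likely seen)
    inputs and on perturbed (likely unseen) ones; a larger gap
    suggests the model leans on memorization rather than
    generalization."""
    return mean(scores_original) - mean(scores_perturbed)

# Hypothetical usage: rename identifiers in a benchmark snippet,
# score the model on both versions, then compare.
snippet = "def add(a, b): return a + b"
renamed = perturb_identifiers(snippet, {"add": "combine"})
gap = memorization_advantage([0.9, 0.8], [0.5, 0.4])
```

A gap near 0 (as reported for CVEFixes) indicates the model performs similarly on seen and unseen variants, consistent with learned generalization; a large gap (as for StarCoder on some benchmarks) points toward memorization.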

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.