ArXiv TLDR

When LLMs Lag Behind: Knowledge Conflicts from Evolving APIs in Code Generation

arXiv: 2604.09515

Ahmed Nusayer Ashik, Shaowei Wang, Tse-Hsun Chen, Muhammad Asaduzzaman, Yuan Tian

cs.SE

TLDR

LLMs struggle with evolving APIs, even with RAG, due to context-memory conflicts, requiring new benchmarks and techniques for reliable code generation.

Key contributions

  • Created a benchmark of 270 real-world API updates from 8 Python libraries to study LLM code generation.
  • Showed that LLMs struggle with evolving APIs: without comprehensive documentation, only 42.55% of generated code is executable.
  • Even with structured documentation, LLMs reach only 66.36% executability, as outdated parametric knowledge persists.
  • Reasoning strategies (e.g., Self-Reflection) boost the executable rate by 11%, highlighting their potential.

Why it matters

This paper highlights a critical limitation of LLMs in practical code generation: their inability to adapt to rapidly evolving APIs. It demonstrates that even with external context, LLMs often prioritize outdated internal knowledge. The findings underscore the urgent need for new benchmarks and techniques to make LLMs reliable for real-world software development.

Original Abstract

The rapid evolution of software libraries creates a significant challenge for Large Language Models (LLMs), whose static parametric knowledge often becomes stale after training. While retrieval-augmented generation (RAG) is commonly used to provide up-to-date API specifications, a "context-memory conflict" arises when external instructions contradict a model's internal parametric knowledge. This paper presents a systematic empirical study of LLM code generation under API evolution (e.g., API deprecation, modification, and addition), based on a benchmark of 270 real-world updates from eight Python libraries. We evaluate 11 models spanning four LLM families. Our results show that without comprehensive documentation, LLMs struggle to prioritize external context: on average, only 42.55% of generated code examples are executable in the target environment. While structured documentation and larger model scales improve LLMs' adoption of API updates, they do not fully resolve executability issues, with the executable rate reaching only 66.36%. In addition, reasoning-based strategies (e.g., Self-Reflection) significantly boost performance, improving the executable rate by 11%. Our findings highlight that LLMs persist in outdated patterns even when API update specifications are provided, and emphasize the need for evolution-aware benchmarks and techniques.
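The kind of context-memory conflict the abstract describes is easy to reproduce with any evolved API. As a minimal sketch (our illustration, not an example from the paper's benchmark), Python's own `datetime.utcnow()` was deprecated in 3.12 in favor of `datetime.now(timezone.utc)`; a model whose parametric knowledge predates the change may keep emitting the old, naive-timestamp call even when current documentation is in its context:

```python
import warnings
from datetime import datetime, timezone

# Outdated pattern a model may reproduce from stale training data:
# datetime.utcnow() is deprecated since Python 3.12 and returns a
# naive (timezone-unaware) timestamp.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    legacy = datetime.utcnow()

# Current documented replacement: an explicitly timezone-aware UTC timestamp.
current = datetime.now(timezone.utc)

print(legacy.tzinfo)   # None (naive)
print(current.tzinfo)  # UTC (aware)
```

Both calls still run today, which is exactly why such benchmarks check executability in a pinned target environment: code using a deprecated API may work locally yet break once the library removes it.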
