ArXiv TLDR

MEMCoder: Multi-dimensional Evolving Memory for Private-Library-Oriented Code Generation

arXiv:2604.24222

Mofei Li, Taozhi Chen, Guowei Yang, Jia Li

cs.SE · cs.AI · cs.CL

TLDR

MEMCoder improves private-library code generation by autonomously evolving usage guidelines from the model's own problem-solving trajectories, substantially outperforming static-documentation RAG.

Key contributions

  • Proposes MEMCoder, an LLM framework for private-library-oriented code generation.
  • Introduces a Multi-dimensional Evolving Memory that captures task-level API coordination patterns and API-level parameter constraints.
  • Implements a dual-source retrieval mechanism combining static docs with evolving usage guidelines.
  • Uses an automated closed loop with execution feedback to dynamically update memory and resolve conflicts.
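The pipeline the contributions describe — dual-source retrieval feeding generation, followed by execution feedback that updates the evolving memory — can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: all class and function names (`EvolvingMemory`, `dual_source_retrieve`, `closed_loop_step`) and the keyword-match retrieval are assumptions made for clarity.

```python
from dataclasses import dataclass, field


@dataclass
class EvolvingMemory:
    """Hypothetical store for usage guidelines along the two dimensions
    described in the paper: task-level coordination patterns and
    API-level parameter constraints."""
    task_level: list = field(default_factory=list)   # API coordination lessons
    api_level: dict = field(default_factory=dict)    # per-API constraints

    def retrieve(self, query: str) -> list:
        # Naive keyword matching stands in for whatever retriever
        # the actual system uses (an assumption, not the paper's method).
        hits = [g for g in self.task_level
                if any(word in g for word in query.split())]
        hits += [c for api, c in self.api_level.items() if api in query]
        return hits


def dual_source_retrieve(query: str, static_docs: dict,
                         memory: EvolvingMemory) -> list:
    """Combine static API documentation with evolved usage guidelines."""
    doc_hits = [doc for api, doc in static_docs.items() if api in query]
    return doc_hits + memory.retrieve(query)


def closed_loop_step(query, static_docs, memory, generate, execute):
    """One closed-loop iteration: retrieve -> generate -> execute ->
    reflect on the outcome -> update memory."""
    context = dual_source_retrieve(query, static_docs, memory)
    code = generate(query, context)
    ok, feedback = execute(code)
    if ok:
        # Distill a success into a task-level coordination guideline.
        memory.task_level.append(f"worked: {query} -> {code}")
    else:
        # Record a failure as an API-level constraint to avoid repeating it.
        memory.api_level[query.split()[0]] = f"avoid: {feedback}"
    return ok
```

In this sketch, `generate` and `execute` would be the LLM call and the sandboxed test runner; both are left as injected callables so the loop itself stays inspectable.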

Why it matters

LLMs fail at private-library code generation because these libraries are absent from public pre-training corpora. MEMCoder enables models to autonomously learn usage guidelines from their own execution feedback, substantially boosting pass@1 and making LLMs more practical for enterprise-specific coding tasks.

Original Abstract

Large Language Models (LLMs) excel at general code generation, but their performance drops sharply in enterprise settings that rely on internal private libraries absent from public pre-training corpora. While Retrieval-Augmented Generation (RAG) offers a training-free alternative by providing static API documentation, we find that such documentation typically provides only isolated definitions, leaving a fundamental knowledge gap. Specifically, LLMs struggle with a task-level lack of coordination patterns between APIs and an API-level misunderstanding of parameter constraints and boundary conditions. To address this, we propose MEMCoder, a novel framework that enables LLMs to autonomously accumulate and evolve Usage Guidelines across these two dimensions. MEMCoder introduces a Multi-dimensional Evolving Memory that captures distilled lessons from the model's own problem-solving trajectories. During inference, MEMCoder employs a dual-source retrieval mechanism to inject both static documentation and relevant historical guidelines into the context. The framework operates in an automated closed loop by using objective execution feedback to reflect on successes and failures, resolve knowledge conflicts, and dynamically update memory. Extensive evaluations on the NdonnxEval and NumbaEval benchmarks demonstrate that MEMCoder substantially enhances existing RAG systems, yielding an average absolute pass@1 gain of 16.31%. Furthermore, MEMCoder exhibits vastly superior domain-specific adaptation compared to existing memory-based continual learning methods.
