ArXiv TLDR

Constraint Decay: The Fragility of LLM Agents in Backend Code Generation

arXiv:2605.06445

Francesco Dente, Dario Satriani, Paolo Papotti

cs.SE cs.AI

TLDR

LLM agents struggle significantly with structural constraints in backend code generation, showing "constraint decay" as requirements accumulate.

Key contributions

  • Systematic study on LLM agents' ability to handle structural constraints in multi-file backend code.
  • Introduces "constraint decay": as structural requirements accumulate, agent performance declines substantially — capable configurations lose 30 points on average in assertion pass rates, while some weaker configurations approach zero.
  • Agents succeed in minimal, explicit frameworks (Flask) but perform substantially worse in convention-heavy ones (FastAPI, Django).
  • Identifies data-layer defects (query composition, ORM violations) as primary causes of failure.
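The data-layer defects named above include incorrect query composition. A minimal sketch of that defect class — using plain `sqlite3` and a hypothetical schema rather than the paper's (unpublished) tasks or an ORM — shows how an agent-style pattern of appending filter clauses one by one produces invalid SQL, while collecting clauses and joining them once does not:

```python
import sqlite3

# Hypothetical schema and data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
conn.executemany("INSERT INTO items (name, price) VALUES (?, ?)",
                 [("pen", 1.5), ("book", 12.0), ("lamp", 30.0)])

def find_items_buggy(max_price=None, name=None):
    # Defect pattern: each optional filter appends its own WHERE, so
    # combining two filters yields "... WHERE ... WHERE ..." — invalid SQL.
    sql = "SELECT name FROM items"
    if max_price is not None:
        sql += " WHERE price <= ?"
    if name is not None:
        sql += " WHERE name = ?"  # second WHERE -> sqlite3.OperationalError
    params = [p for p in (max_price, name) if p is not None]
    return [row[0] for row in conn.execute(sql, params)]

def find_items(max_price=None, name=None):
    # Correct composition: collect clauses, then emit WHERE exactly once.
    clauses, params = [], []
    if max_price is not None:
        clauses.append("price <= ?")
        params.append(max_price)
    if name is not None:
        clauses.append("name = ?")
        params.append(name)
    sql = "SELECT name FROM items"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return [row[0] for row in conn.execute(sql, params)]

print(find_items(max_price=15.0))  # ['pen', 'book']
try:
    find_items_buggy(max_price=15.0, name="book")
except sqlite3.OperationalError as e:
    print("buggy composition fails:", e)
```

The single-filter case works in both versions, which is why end-to-end tests that only exercise one filter at a time can miss this class of bug.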

Why it matters

This paper reveals a critical limitation of LLM coding agents: they struggle to jointly satisfy functional and structural requirements for production-grade software. It highlights a key open challenge, especially for complex, real-world backend development.

Original Abstract

Large Language Model (LLM) agents demonstrate strong performance in autonomous code generation under loose specifications. However, production-grade software requires strict adherence to structural constraints, such as architectural patterns, databases, and object-relational mappings. Existing benchmarks often overlook these non-functional requirements, rewarding functionally correct but structurally arbitrary solutions. We present a systematic study evaluating how well agents handle structural constraints in multi-file backend generation. By fixing a unified API contract across 80 greenfield generation tasks and 20 feature-implementation tasks spanning eight web frameworks, we isolate the effect of structural complexity using a dual evaluation with end-to-end behavioral tests and static verifiers. Our findings reveal a phenomenon of constraint decay: as structural requirements accumulate, agent performance exhibits a substantial decline. Capable configurations lose 30 points on average in assertion pass rates from baseline to fully specified tasks, while some weaker configurations approach zero. Framework sensitivity analysis exposes significant performance disparities: agents succeed in minimal, explicit frameworks (e.g., Flask) but perform substantially worse on average in convention-heavy environments (e.g., FastAPI, Django). Finally, error analysis identifies data-layer defects (e.g., incorrect query composition and ORM runtime violations) as the leading root causes. This work highlights that jointly satisfying functional and structural requirements remains a key open challenge for coding agents.
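The abstract's dual evaluation pairs behavioral tests with static verifiers of structural constraints. The paper's actual verifiers are not reproduced here, but the idea can be sketched with the standard-library `ast` module, assuming a hypothetical layering rule (handlers must not open database connections directly):

```python
import ast

# Hypothetical generated handler that violates the (assumed) layering rule:
# only a repository module may call sqlite3.connect.
generated_handler = '''
import sqlite3

def list_users():
    conn = sqlite3.connect("app.db")   # direct DB access in a handler
    return conn.execute("SELECT name FROM users").fetchall()
'''

def find_violations(source: str) -> list[int]:
    """Return line numbers of direct sqlite3.connect calls in the source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "connect"
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "sqlite3"):
            hits.append(node.lineno)
    return hits

print(find_violations(generated_handler))  # one violation reported
```

A verifier like this passes or fails independently of the behavioral tests, which is what lets the study separate functionally correct solutions from structurally compliant ones.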
