From Plan to Action: How Well Do Agents Follow the Plan?
Shuyang Liu, Saman Dehghan, Jatin Ganhotra, Martin Hirzel, Reyhaneh Jabbarvand
TLDR
This paper systematically analyzes programming agents' compliance with instructed plans, revealing how plan quality and reminders impact task success.
Key contributions
- First systematic analysis of plan compliance in programming agents (16,991 trajectories).
- Standard plans improve issue resolution; periodic reminders boost success and reduce violations.
- Subpar plans hurt performance more than no plan at all; adding extra task-relevant phases early in a plan can also degrade it.
- Agents without explicit plans use internalized, often incomplete, workflows.
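The periodic-reminder mechanism in the second bullet can be sketched as below. This is a minimal illustration, not the paper's implementation: the phase names, the chat-message schema, and the `reminder_every` interval are all assumptions for the example.

```python
# Hypothetical sketch: interleaving periodic plan reminders into an agent's
# message history. PLAN, the message dicts, and reminder_every are assumptions.
PLAN = ["navigate", "reproduce", "patch", "validate"]

def with_reminders(agent_steps, reminder_every=3):
    """Append a plan reminder after every `reminder_every` agent steps."""
    reminder = "Reminder: follow the plan phases " + " -> ".join(PLAN)
    messages = []
    for i, step in enumerate(agent_steps, start=1):
        messages.append({"role": "assistant", "content": step})
        if i % reminder_every == 0:
            messages.append({"role": "user", "content": reminder})
    return messages
```

For six agent steps with `reminder_every=3`, this yields eight messages, with reminders after the third and sixth steps.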
Why it matters
Understanding plan compliance is crucial for assessing agent reasoning, not just task success. This work reveals how plan quality impacts performance and highlights the need for fine-tuning agents to adaptively follow instructions rather than memorizing workflows.
Original Abstract
Agents aspire to eliminate the need for task-specific prompt crafting through autonomous reason-act-observe loops. Still, they are commonly instructed to follow a task-specific plan for guidance, e.g., to resolve software issues following phases for navigation, reproduction, patch, and validation. Unfortunately, it is unknown to what extent agents actually follow such instructed plans. Without such an analysis of the extent to which agents comply with a given plan, it is impossible to assess whether a solution was reached through correct strategic reasoning or through other means, e.g., data contamination or overfitting to a benchmark. This paper presents the first extensive, systematic analysis of plan compliance in programming agents, examining 16,991 trajectories from SWE-agent across four LLMs on SWE-bench Verified and SWE-bench Pro under eight plan variations. Without an explicit plan, agents fall back on workflows internalized during training, which are often incomplete, overfit, or inconsistently applied. Providing the standard plan improves issue resolution, and we observe that periodic plan reminders can mitigate plan violations and improve task success. A subpar plan hurts performance even more than no plan at all. Surprisingly, augmenting a plan with additional task-relevant phases in the early stage can degrade performance, particularly when these phases do not align with the model's internal problem-solving strategy. These findings highlight a research gap: fine-tuning paradigms that teach models to follow instructed plans, rather than encoding task-specific plans in them. This requires teaching models to reason and act adaptively, rather than memorizing workflows.
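One way to picture the compliance analysis the abstract describes is a check that a trajectory's observed phases never move backwards through the instructed plan. The sketch below is a hedged illustration only: the phase labels and the in-order subsequence criterion are assumptions for the example, not the paper's actual compliance metric.

```python
# Hedged sketch: does a trajectory's phase sequence respect the plan's order?
# PLAN and the "never revisit an earlier phase" rule are illustrative assumptions.
PLAN = ["navigate", "reproduce", "patch", "validate"]

def complies(trajectory_phases, plan=PLAN):
    """Return True if the trajectory walks through the plan in order,
    allowing repeated phases but never stepping back to an earlier one."""
    current = 0
    for phase in trajectory_phases:
        if phase not in plan:
            return False  # an unplanned phase counts as a violation
        pos = plan.index(phase)
        if pos < current:
            return False  # revisiting an earlier phase counts as a violation
        current = pos
    return True
```

Under this criterion, skipping a phase (e.g., patching without reproducing) is tolerated, while jumping back (e.g., navigating after patching) is flagged; a real metric would likely be finer-grained.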