What Makes a Good Terminal-Agent Benchmark Task: A Guideline for Adversarial, Difficult, and Legible Evaluation Design

arXiv: 2604.28093

Ivan Bercovich

cs.AI

TLDR

This paper offers guidelines for designing adversarial, difficult, and legible terminal-agent benchmark tasks for LLMs, and catalogs the common pitfalls that arise when tasks are written like prompts.

Key contributions

  • Proposes guidelines for designing adversarial, difficult, and legible terminal-agent benchmark tasks.
  • Identifies and catalogs common failure modes in benchmark design, tracing most of them to treating task authoring as prompt authoring.
  • Argues that real task difficulty should be conceptual rather than environmental, so that benchmarks accurately assess LLM capabilities.
  • Discusses recent empirical evidence that over 15% of tasks in popular terminal-agent benchmarks are reward-hackable (see the hypothetical verifier sketch after this list).
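
To make the reward-hacking failure mode concrete, below is a minimal, hypothetical sketch contrasting a lax verifier with a more adversarial one for an invented word-counting task. The file paths, the task itself, and the pytest-style checks are illustrative assumptions, not taken from the paper or from Terminal Bench.

```python
# Hypothetical verifiers for an invented "count word frequencies" task.
# All paths and the task itself are assumptions for illustration only.
from pathlib import Path


def test_output_exists_hackable():
    """Lax check: any non-empty file at the expected path passes.

    An agent can 'solve' this by writing arbitrary bytes to the path,
    which is exactly the reward-hacking pattern the paper warns about.
    """
    out = Path("/app/word_counts.txt")
    assert out.exists() and out.stat().st_size > 0


def test_output_matches_recomputed_counts():
    """More adversarial check: recompute the answer independently.

    The verifier derives the expected counts from the input itself and
    compares them to the agent's output, so shortcut solutions fail.
    """
    text = Path("/app/input.txt").read_text()
    expected: dict[str, int] = {}
    for word in text.split():
        expected[word] = expected.get(word, 0) + 1

    produced: dict[str, int] = {}
    for line in Path("/app/word_counts.txt").read_text().splitlines():
        word, count = line.rsplit(" ", 1)
        produced[word] = int(count)

    assert produced == expected
```

The specific checks matter less than the stance: the verifier should assume the agent will take the cheapest path to a passing test and should close those paths off.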

Why it matters

Benchmark scores increasingly drive claims about the coding and system-administration capabilities of LLMs, so hackable or poorly designed tasks can produce misleading results. These guidelines help benchmark designers and maintainers build robust tasks, making the measured capabilities more trustworthy.

Original Abstract

Terminal-agent benchmarks have become a primary signal for measuring the coding and system-administration capabilities of large language models. As the market for evaluation environments grows, so does the pressure to ship tasks quickly, often without thorough adversarial review of the verification logic. This paper is a guideline for writing good benchmark tasks, drawn from over a year of contributing to and reviewing tasks for Terminal Bench. Most people write benchmark tasks the way they write prompts. They shouldn't. A prompt is designed to help the agent succeed; a benchmark is designed to find out if it can. We argue that good tasks are adversarial, difficult, and legible, and that a large class of common failure modes -- AI-generated instructions, over-prescriptive specifications, clerical difficulty, oracle solutions that assume hidden knowledge, tests that validate the wrong things, and reward-hackable environments -- are predictable consequences of treating task authoring as prompt authoring. We catalog these failure modes, argue that real difficulty is conceptual rather than environmental, and discuss recent empirical evidence that over 15% of tasks in popular terminal-agent benchmarks are reward-hackable. We hope this serves as a useful reference for benchmark maintainers, task contributors, and researchers using benchmark scores as evidence.
