On Benchmark Hacking in ML Contests: Modeling, Insights and Design
Xiaoyun Qiu, Yang Yu, Haifeng Xu
TLDR
This paper models benchmark hacking in ML contests as a strategic effort-allocation game, showing which contestants hack in equilibrium and how reward design affects true generalization.
Key contributions
- Defines benchmark hacking by comparing a contestant's equilibrium effort allocation to a single-agent baseline.
- Shows contestants with types below a threshold (low types) always hack the benchmark in equilibrium, while high types do not.
- Demonstrates that more skewed reward structures (favoring top-ranked contestants) can elicit more desirable contest outcomes.
- Supports theory with empirical evidence from contest data.
Why it matters
Understanding benchmark hacking helps contest hosts design competitions that reward genuine model improvements. This work guides the design of reward structures that reduce gaming and enhance true generalization.
Original Abstract
Benchmark hacking refers to tuning a machine learning model to score highly on certain evaluation criteria without improving true generalization or faithfully solving the intended problem. We study this phenomenon in a generic machine learning contest, where each contestant chooses two types of effort: creative effort that improves model capability as desired by the contest host, and mechanistic effort that only improves the model's fitness to the particular task in the contest without contributing to true generalization. We establish the existence of a symmetric monotone pure strategy equilibrium in this competition game. This equilibrium also provides a natural definition of benchmark hacking in the strategic context by comparing a player's equilibrium effort allocation to that of a single-agent baseline scenario. Under our definition, contestants with types below a certain threshold (low types) always engage in benchmark hacking, whereas those above the threshold do not. Furthermore, we show that more skewed reward structures (favoring top-ranked contestants) can elicit more desirable contest outcomes. We also provide empirical evidence to support our theoretical predictions.
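Illustrative sketch
The abstract describes the model only qualitatively, so the following is a minimal numerical sketch, not the paper's actual model, of how the predicted pattern can arise. It assumes a winner-take-all prize R, a logistic approximation of the win probability with sharpness k against a fixed rival benchmark level b0, and quadratic effort costs under which creative effort is cheaper for higher-type contestants; all of these names, parameters, and functional forms are illustrative assumptions, not taken from the paper.

```python
# Toy sketch (NOT the paper's model): a contestant of type t splits effort
# between creative effort c (raises both benchmark score and true quality)
# and mechanistic effort m (raises benchmark score only).
import numpy as np

R, k, b0 = 10.0, 1.0, 1.0  # assumed: prize, contest sharpness, rival score level

def payoff(c, m, t):
    score = c + m  # benchmark score counts both kinds of effort
    p_win = 1.0 / (1.0 + np.exp(-k * (score - b0)))  # logistic win probability
    cost = c**2 / (2.0 * t) + m**2 / 2.0  # creative effort cheaper for high t
    return R * p_win - cost

# Brute-force search over (c, m) for each type's best effort allocation.
grid = np.linspace(0.0, 4.0, 201)
C, M = np.meshgrid(grid, grid)

for t in [0.2, 0.5, 1.0, 2.0]:
    P = payoff(C, M, t)
    i, j = np.unravel_index(P.argmax(), P.shape)
    c_star, m_star = C[i, j], M[i, j]
    share = m_star / (c_star + m_star)  # fraction of effort spent "hacking"
    print(f"type {t:4.1f}: creative={c_star:.2f} mechanistic={m_star:.2f} "
          f"hacking share={share:.2f}")
```

Under these assumed costs, the first-order conditions imply a mechanistic effort share of roughly 1/(1+t), so the printed hacking share falls as type rises. This mirrors the paper's qualitative prediction that lower types rely more heavily on benchmark-only tuning, though the paper's threshold result rests on its equilibrium analysis rather than this toy setup.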