
The Counterexample Game: Iterated Conceptual Analysis and Repair in Language Models

arXiv:2605.03936

Daniel Drucker, Kyle Mahowald

cs.CL, cs.AI

TLDR

LMs can perform conceptual analysis via iterated counterexample games, but the repair loop quickly hits diminishing returns and produces verbose definitions.

Key contributions

  • LMs can perform iterated conceptual analysis using counterexample games.
  • LM-generated counterexamples are often judged invalid, but an LM judge accepts roughly twice as many as human experts do.
  • Per-item validity judgments are moderately consistent across humans and between humans and the LM.
  • Extended iteration yields verbose definitions without improving accuracy.

Why it matters

This paper explores LMs' capacity for philosophical reasoning, specifically conceptual analysis. It highlights both the potential and limitations of LMs in sustained high-level iterated reasoning, suggesting the counterexample game as a valuable test case for future evaluations.

Original Abstract

Conceptual analysis -- proposing definitions and refining them through counterexamples -- is central to philosophical methodology. We study whether language models can perform this task through iterated analysis and repair chains: one model instance generates counterexamples to a proposed definition, another repairs the definition, and the process repeats. Across 20 concepts and thousands of counterexample-repair cycles, we find that, although many LM-generated counterexamples are judged invalid by both expert humans and an LM judge, the LM judge accepts roughly twice as many as humans do. Nonetheless, per-item validity judgments are moderately consistent across humans and between humans and the LM. We further find that extended iteration produces increasingly verbose definitions without improving accuracy. We also see that some concepts resist stable definitions in general. These findings suggest that while LMs can engage in philosophical reasoning, the counterexample-repair loop hits diminishing returns quickly and could be a fruitful test case for evaluating whether LMs can sustain high-level iterated philosophical reasoning.
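For concreteness, here is a minimal sketch of the iterated counterexample-repair loop the abstract describes. The `query_lm` helper, the exact prompts, and the fixed round budget are illustrative assumptions, not the paper's actual setup.

```python
def query_lm(prompt: str) -> str:
    """Hypothetical LM call; swap in any provider's chat/completions API."""
    raise NotImplementedError


def counterexample_game(concept: str, definition: str, rounds: int = 10) -> str:
    """Run the iterated analysis-and-repair loop for one concept."""
    for _ in range(rounds):
        # One model instance attacks the current definition.
        counterexample = query_lm(
            f"Give a counterexample to this definition of '{concept}': {definition}"
        )
        # A judge screens the counterexample (the paper uses both expert
        # humans and an LM judge for this step).
        verdict = query_lm(
            f"Definition: {definition}\nCounterexample: {counterexample}\n"
            "Is this a valid counterexample? Answer VALID or INVALID."
        )
        if "INVALID" in verdict.upper():
            continue  # invalid counterexamples trigger no repair
        # Another instance repairs the definition against the counterexample.
        definition = query_lm(
            f"Revise this definition of '{concept}' to handle the counterexample.\n"
            f"Definition: {definition}\nCounterexample: {counterexample}"
        )
    return definition
```

The judge step is where the paper's headline numbers live: many generated counterexamples are invalid, and whether the judge is a human or an LM changes how often a repair is triggered.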
