Buying the Right to Monitor: Editorial Design in AI-Assisted Peer Review
TLDR
Once AI capability crosses a critical threshold, reviewer effort in peer review collapses discontinuously. The optimal editorial response is a policy reversal: loosen acceptance standards while investing in AI detection.
Key contributions
- Models AI's dual impact on authors (polishing) and reviewers (report generation) in peer review.
- Shows reviewer effort collapses discontinuously when AI capability reaches a critical threshold.
- Identifies a strict policy reversal for editors: tighten acceptance standards before the AI transition, loosen them after.
- Proves analytically that, post-transition, loosened standards must be paired with investment in AI detection, under log-concave quality distributions.
Why it matters
This paper offers guidance for managing AI's disruptive impact on peer review. Its central, counterintuitive finding is that after the AI transition editors should loosen acceptance standards while investing in AI detection, since further tightening only amplifies wasteful polishing without improving sorting. The lesson extends to any evaluative system that must preserve the informativeness of its signals.
Original Abstract
Generative AI acts as a disruptive technological shock to evaluative organizations. In academic peer review, it enters both sides of the market: authors use AI to polish submissions, and reviewers use it to generate plausible reports without exerting evaluative effort. We develop a three-sided equilibrium model to analyze this dual adoption and derive a counterintuitive managerial implication for journal policy. We show that when AI capability crosses a critical threshold, reviewer effort collapses discontinuously. This transition creates a welfare misalignment: authors benefit from a weakened "rat race," while editors suffer from degraded signal informativeness. Characterizing the editor's optimal constrained response, we identify a strict policy reversal. Before the AI transition, editors should tighten acceptance standards to curb rent-dissipating author polishing. After the transition, conventional intuition fails: editors must loosen acceptance standards while investing in AI detection, because further tightening only amplifies dissipative polishing without improving sorting. We prove analytically that this sign reversal is a structural consequence of the reviewer effort collapse under log-concave quality distributions. Ultimately, addressing AI in evaluative systems requires treating monitoring and loosened selectivity as complementary design instruments.
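The discontinuous collapse described in the abstract can be illustrated with a toy sketch. Everything below is our own simplifying assumption, not the paper's model: the reviewer faces a discrete choice between an honest review (effort `e` yields payoff `benefit*e - cost*e**2`) and a zero-effort AI-generated report whose payoff equals AI capability `a`. The function names and parameter values are illustrative only.

```python
# Toy sketch of discontinuous reviewer-effort collapse (illustrative, not the
# paper's equilibrium model). The reviewer picks whichever regime pays more:
# an honest review with interior optimal effort, or a zero-effort AI report.

def optimal_effort(a: float, benefit: float = 1.0, cost: float = 0.6) -> float:
    """Return the reviewer's payoff-maximizing effort given AI capability a."""
    e_star = min(benefit / (2 * cost), 1.0)            # interior optimum of honest review
    honest_payoff = benefit * e_star - cost * e_star ** 2
    ai_payoff = a                                      # shortcut: plausible AI report
    return 0.0 if ai_payoff > honest_payoff else e_star

# Sweep AI capability from 0 to 1: effort stays flat at e*, then jumps to zero
# once the AI shortcut's payoff exceeds the honest review's payoff.
efforts = [optimal_effort(a / 100) for a in range(101)]
```

Even in this crude sketch, effort does not decline smoothly: because the reviewer chooses between two discrete regimes, crossing the payoff threshold produces a jump from positive effort to zero, mirroring the structural mechanism the paper derives in equilibrium.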