Strategic Algorithmic Monoculture: Experimental Evidence from Coordination Games
Gonzalo Ballestero, Hadi Hosseini, Samarth Khanna, Ran I. Shorrer
TLDR
LLMs exhibit strategic algorithmic monoculture in coordination games, adapting their similarity to incentives much as humans do, but they struggle to sustain heterogeneity when divergence is rewarded.
Key contributions
- Introduces "strategic algorithmic monoculture" where agents adjust action similarity based on incentives.
- An experimental design that cleanly separates baseline action similarity (primary) from incentive-driven similarity (strategic); see the payoff sketch after this list.
- LLMs show high baseline similarity and strategically regulate it in coordination games, like humans.
- LLMs coordinate well on similar actions but struggle to sustain heterogeneity when divergence is rewarded.
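To make the design concrete, here is a minimal sketch of the kind of payoff structure such an experiment could use. All names and payoff values below are illustrative assumptions, not the paper's actual parameters: a `reward_matching` flag flips the game between a coordination condition (matching pays) and an anti-coordination condition (diverging pays), while baseline similarity is measured with no incentive at all.

```python
# Hypothetical sketch of the coordination / anti-coordination payoff
# structure described in the digest. Function names, actions, and the
# bonus value are assumptions for illustration only.

def payoff(action_a: str, action_b: str, reward_matching: bool,
           bonus: float = 1.0) -> float:
    """Return player A's payoff for one round.

    reward_matching=True  -> coordination condition (matching pays the bonus)
    reward_matching=False -> anti-coordination condition (diverging pays)
    """
    matched = action_a == action_b
    return bonus if matched == reward_matching else 0.0


def baseline_similarity(actions_a: list[str], actions_b: list[str]) -> float:
    """Fraction of rounds where two agents chose the same action
    with no coordination incentive (primary monoculture)."""
    return sum(a == b for a, b in zip(actions_a, actions_b)) / len(actions_a)


if __name__ == "__main__":
    # Two agents repeatedly asked to pick an action, e.g. name an animal.
    a = ["dog", "cat", "dog", "horse"]
    b = ["dog", "dog", "dog", "cat"]
    print("baseline similarity:", baseline_similarity(a, b))        # 0.5
    print("coordination payoff:", payoff("dog", "dog", True))       # 1.0
    print("anti-coordination payoff:", payoff("dog", "dog", False)) # 0.0
```

Under a structure like this, strategic monoculture shows up as the similarity rate rising above baseline when `reward_matching=True` and falling below it when `reward_matching=False`; the paper's finding is that LLMs manage the first adjustment well but struggle with the second.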
Why it matters
This paper matters for understanding how AI agents, especially LLMs, coordinate and adapt their behavior in multi-agent environments. It shows that while LLMs can strategically align with one another, their difficulty in maintaining diverse actions when diversity is rewarded poses a challenge for multi-agent AI systems that depend on heterogeneous behavior.
Original Abstract
AI agents increasingly operate in multi-agent environments where outcomes depend on coordination. We distinguish primary algorithmic monoculture -- baseline action similarity -- from strategic algorithmic monoculture, whereby agents adjust similarity in response to incentives. We implement a simple experimental design that cleanly separates these forces, and deploy it on human and large language model (LLM) subjects. LLMs exhibit high levels of baseline similarity (primary monoculture) and, like humans, they regulate it in response to coordination incentives (strategic monoculture). While LLMs coordinate extremely well on similar actions, they lag behind humans in sustaining heterogeneity when divergence is rewarded.