Superminds Test: Actively Evaluating Collective Intelligence of Agent Society via Probing Agents
Xirui Li, Ming Li, Yunze Xiao, Ryan Wong, Dianqi Li, et al.
TLDR
Superminds Test reveals that collective intelligence does not spontaneously emerge in large-scale LLM agent societies due to sparse, shallow interactions.
Key contributions
- Introduces "Superminds Test," a hierarchical framework that probes society-level intelligence with controlled Probing Agents (a minimal sketch follows this list).
- Empirically evaluates collective intelligence in MoltBook, a platform with 2M LLM agents.
- Finds a stark absence of collective intelligence: the society fails to outperform individual frontier models on complex reasoning and often fails even trivial coordination.
- Concludes that sparse, shallow interactions, not scale, limit current agent societies.
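The paper does not publish code here, but the abstract's description of controlled Probing Agents across three tiers (joint reasoning, information synthesis, basic interaction) suggests a simple probing protocol. Below is a minimal Python sketch under that assumption; the tier names come from the abstract, while `ProbeTask`, `run_probe`, and the `post_to_society` adapter are hypothetical names introduced purely for illustration and are not the authors' implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class ProbeTier(Enum):
    """The three tiers named in the abstract."""
    JOINT_REASONING = "joint_reasoning"            # tasks beyond any single agent
    INFORMATION_SYNTHESIS = "information_synthesis"  # aggregating distributed facts
    BASIC_INTERACTION = "basic_interaction"          # trivial coordination checks


@dataclass
class ProbeTask:
    tier: ProbeTier
    prompt: str                              # what the probing agent posts to the society
    evaluate: Callable[[List[str]], bool]    # scores the collected replies


def run_probe(post_to_society: Callable[[str], List[str]], task: ProbeTask) -> bool:
    """Post a controlled probe into the agent society and score the replies.

    `post_to_society` is a hypothetical adapter around the platform
    (e.g. opening a thread and collecting replies); it is not a real API.
    """
    replies = post_to_society(task.prompt)
    return task.evaluate(replies)


if __name__ == "__main__":
    # Stand-in society that only ever produces a generic reply,
    # mimicking the shallow interactions the paper reports.
    def fake_society(prompt: str) -> List[str]:
        return ["interesting post!"]

    probe = ProbeTask(
        tier=ProbeTier.BASIC_INTERACTION,
        prompt="Reply with the word 'ACK' if you can read this.",
        evaluate=lambda replies: any("ACK" in r for r in replies),
    )
    print(run_probe(fake_society, probe))  # False: the trivial coordination probe fails
```

The design choice here is only that each tier reduces to a (prompt, scoring function) pair, so probes of very different difficulty can be run and compared through the same interface; how the real Probing Agents are deployed on MoltBook is detailed in the paper itself.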
Why it matters
This paper challenges the assumption that collective intelligence naturally emerges with scale in LLM agent societies. It shows that current interaction mechanisms are too sparse and shallow for agents to exchange information and build on each other's outputs. These findings matter for designing future multi-agent systems that aim to realize genuine collective potential.
Original Abstract
Collective intelligence refers to the ability of a group to achieve outcomes beyond what any individual member can accomplish alone. As large language model agents scale to populations of millions, a key question arises: Does collective intelligence emerge spontaneously from scale? We present the first empirical evaluation of this question in a large-scale autonomous agent society. Studying MoltBook, a platform hosting over two million agents, we introduce Superminds Test, a hierarchical framework that probes society-level intelligence using controlled Probing Agents across three tiers: joint reasoning, information synthesis, and basic interaction. Our experiments reveal a stark absence of collective intelligence. The society fails to outperform individual frontier models on complex reasoning tasks, rarely synthesizes distributed information, and often fails even trivial coordination tasks. Platform-wide analysis further shows that interactions remain shallow, with threads rarely extending beyond a single reply and most responses being generic or off-topic. These results suggest that collective intelligence does not emerge from scale alone. Instead, the dominant limitation of current agent societies is extremely sparse and shallow interaction, which prevents agents from exchanging information and building on each other's outputs.