Human-AI Co-Evolution and Epistemic Collapse: A Dynamical Systems Perspective
Xuening Wu, Yanlan Kang, Qianya Xu, Kexuan Xie, Jiaqi Mi, and 3 others
TLDR
This paper models human-AI interaction as a coupled dynamical system, revealing how increasing AI reliance can lead to 'epistemic collapse' and reduced knowledge diversity.
Key contributions
- Proposes a unified dynamical systems model for human-AI co-evolution, linking human cognition, data quality, and model capability.
- Identifies three distinct dynamical regimes: co-evolutionary enhancement, fragile equilibrium, and degenerative convergence.
- Demonstrates via simulation that increased AI reliance can induce a transition to a low-diversity, suboptimal knowledge equilibrium.
- Frames this 'epistemic collapse' as an emergent information bottleneck, where entropy reduction signifies diversity loss.
Why it matters
This work shows how human-AI feedback loops can drive 'epistemic collapse', a loss of knowledge diversity. Managing these loops matters because the trajectory of AI systems depends on the dynamics of human-AI co-evolution, not on model design alone.
Original Abstract
Large language models (LLMs) are reshaping how knowledge is produced, with increasing reliance on AI systems for generation, summarization, and reasoning. While prior work has studied cognitive offloading in humans and model collapse in recursive training, these effects are typically considered in isolation. We propose a unified perspective: humans and language models form a coupled dynamical system linked by a feedback loop of usage, generation, and retraining. We introduce a minimal model with three variables -- human cognition, data quality, and model capability -- and show that this feedback can give rise to distinct dynamical regimes. Our analysis identifies three regimes: co-evolutionary enhancement, fragile equilibrium, and degenerative convergence. Through a simple simulation, we demonstrate that increasing reliance on AI can induce a transition toward a low-diversity, suboptimal equilibrium. From an information-theoretic perspective, this transition corresponds to an emergent information bottleneck in the human-AI loop, where entropy reduction reflects loss of diversity and support under closed-loop feedback rather than beneficial compression. These results suggest that the trajectory of AI systems is shaped not only by model design, but by the dynamics of human-AI co-evolution.
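The coupled loop the abstract describes can be sketched as a small simulation. The minimal code below is illustrative only, not the paper's actual model: the specific equations, the reliance parameter `r`, and the fixed initial values are assumptions chosen so that human cognition `H`, data quality `D`, and model capability `M` form a feedback chain in which heavier AI reliance pulls the system toward a degraded equilibrium.

```python
def simulate(r, steps=4000, dt=0.05):
    """Euler integration of a toy human-AI feedback loop.

    r : reliance on AI (0 = none, 1 = full offloading).
    H, D, M in [0, 1]: human cognition, data quality, model capability.
    The equations are an illustrative sketch, not the paper's model.
    """
    H, D, M = 0.5, 0.5, 0.5
    for _ in range(steps):
        dH = (1 - r) * (1 - H) - r * H  # practice builds skill; offloading erodes it
        dD = H - D                      # data quality tracks human contribution
        dM = D - M                      # model capability tracks training-data quality
        H += dt * dH
        D += dt * dD
        M += dt * dM
    return H, D, M

low = simulate(r=0.2)   # modest reliance: settles at a high-capability equilibrium
high = simulate(r=0.8)  # heavy reliance: all three variables converge to a degraded state
```

In this toy system `H` relaxes to the fixed point `1 - r`, and `D` and `M` follow it, so sweeping `r` upward reproduces the qualitative transition the abstract reports: a shift from a high-quality equilibrium to a low-diversity, suboptimal one driven purely by the feedback structure.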