Evaluating Generative Models as Interactive Emergent Representations of Human-Like Collaborative Behavior
Shinas Shaji, Teena Chakkalayil Hassan, Sebastian Houben, Alex Mitrevski
TLDR
Foundation model agents show emergent human-like collaborative behaviors in a 2D game, enhancing human-AI teamwork.
Key contributions
- Developed a 2D game to test human-AI collaboration on color-matching tasks.
- Defined five key collaborative behaviors as indicators of mental model representation.
- Used LLM-based judges to automatically detect collaborative behaviors with strong human agreement.
- Conducted a user study showing positive satisfaction and perceived collaboration effectiveness with LLM agents.
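The LLM-judge detection step can be pictured as a prompted classifier over agent utterances. This is a minimal sketch, not the paper's implementation: the prompt wording, the `judge` callable, and the label normalization are all assumptions; only the five behavior labels come from the paper.

```python
# Hypothetical sketch of LLM-judge behavior detection. `judge` is any
# callable mapping a prompt string to a model response string (e.g. a
# wrapper around an LLM API); its implementation is assumed, not given.

BEHAVIORS = [
    "perspective-taking",
    "collaborator-aware planning",
    "introspection",
    "theory of mind",
    "clarification",
]

def build_judge_prompt(utterance: str) -> str:
    """Format one agent utterance into a single-label classification prompt."""
    labels = ", ".join(BEHAVIORS)
    return (
        "You are annotating a collaborative color-matching game.\n"
        f"Which behavior does the utterance show: {labels}, or none?\n"
        f"Utterance: {utterance!r}\n"
        "Answer with exactly one label."
    )

def detect_behavior(utterance: str, judge) -> str:
    """Run the judge and normalize its answer to a known label, else 'none'."""
    answer = judge(build_judge_prompt(utterance)).strip().lower()
    return answer if answer in BEHAVIORS else "none"
```

A mock judge (`lambda prompt: "clarification"`) is enough to exercise the pipeline; in practice each response would be validated or retried before being counted toward behavior frequencies.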
Why it matters
This paper demonstrates that foundation models can naturally exhibit complex collaborative behaviors, crucial for effective human-AI teamwork. It offers a validated framework and tools to analyze and improve embodied AI collaboration.
Original Abstract
Human-AI collaboration requires AI agents to understand human behavior for effective coordination. While advances in foundation models show promising capabilities in understanding and showing human-like behavior, their application in embodied collaborative settings needs further investigation. This work examines whether embodied foundation model agents exhibit emergent collaborative behaviors indicating underlying mental models of their collaborators, which is an important aspect of effective coordination. This paper develops a 2D collaborative game environment where large language model agents and humans complete color-matching tasks requiring coordination. We define five collaborative behaviors as indicators of emergent mental model representation: perspective-taking, collaborator-aware planning, introspection, theory of mind, and clarification. An automated behavior detection system using LLM-based judges identifies these behaviors, achieving fair to substantial agreement with human annotations. Results from the automated behavior detection system show that foundation models consistently exhibit emergent collaborative behaviors without being explicitly trained to do so. These behaviors occur at varying frequencies during collaboration stages, with distinct patterns across different LLMs. A user study was also conducted to evaluate human satisfaction and perceived collaboration effectiveness, with the results indicating positive collaboration experiences. Participants appreciated the agents' task focus, plan verbalization, and initiative, while suggesting improvements in response times and human-like interactions. This work provides an experimental framework for human-AI collaboration, empirical evidence of collaborative behaviors in embodied LLM agents, a validated behavioral analysis methodology, and an assessment of collaboration effectiveness.