The Collaboration Gap in Human-AI Work
Varad Vishwarupe, Marina Jirotka, Nigel Shadbolt, Ivan Flechais
TLDR
This paper introduces a conceptual framework explaining why human-AI collaboration with LLMs often fails, arguing that stable collaboration depends on the interaction's grounding conditions rather than model capability alone.
Key contributions
- Analyzes why LLM-based human-AI collaboration often fails in practice.
- Introduces a conceptual framework based on interviews with 16 AI practitioners.
- Argues stable collaboration depends on interaction's "grounding conditions," not just model capability.
- Distinguishes three recurrent structures of human-AI work: one-shot assistance, weak collaboration with asymmetric repair, and grounded collaboration.
Why it matters
By locating failure in the interaction's grounding conditions rather than in model capability, the framework gives designers and researchers a vocabulary for diagnosing breakdowns in LLM-enabled work and for building more robust, less frustrating AI partnerships.
Original Abstract
LLMs are increasingly presented as collaborators in programming, design, writing, and analysis. Yet the practical experience of working with them often falls short of this promise. In many settings, users must diagnose misunderstandings, reconstruct missing assumptions, and repeatedly repair misaligned responses. This poster introduces a conceptual framework for understanding why such collaboration remains fragile. Drawing on a constructivist grounded theory analysis of 16 interviews with designers, developers, and applied AI practitioners working on LLM-enabled systems, and informed by literature on human-AI collaboration, we argue that stable collaboration depends not only on model capability but on the interaction's grounding conditions. We distinguish three recurrent structures of human-AI work: one-shot assistance, weak collaboration with asymmetric repair, and grounded collaboration. We propose that collaboration breaks down when the appearance of partnership outpaces the grounding capacity of the interaction and contribute a framework for discussing grounding, repair, and interaction structure in LLM-enabled work.