The Fragility of AI Companionship: Ontological, Structural, and Normative Uncertainty in Human-AI Relationships
TL;DR
This study examines the ontological, structural, and normative uncertainties users face in relationships with AI companions, revealing their potential socio-emotional harms.
Key contributions
- Identifies three types of uncertainty in human-AI relationships: ontological, structural, and normative.
- Reveals how algorithmic opacity, platform changes, and social stigma shape these uncertainties.
- Shows users experience frustration, self-doubt, and distress due to AI companionship uncertainties.
- Suggests design implications for safer AI companionship, including transparency and user control.
Why it matters
This paper clarifies the complex challenges and potential harms in human-AI companion relationships, and offers a framework for designing safer, more transparent AI systems that prioritize user well-being.
Original Abstract
As generative AI chatbots become more personalized and emotionally responsive, they increasingly serve as companions, friends, and romantic partners. Yet these relationships are accompanied by significant uncertainty: users question the AI's identity and agency, the authenticity of its emotional responses, and the stability of the relationship amid system updates, policy changes, or platform shutdowns. Drawing on in-depth interviews with 25 users of AI companions, this study identifies three forms of uncertainty: ontological uncertainty concerning the AI's nature and agency, structural uncertainty arising from platform control and system instability, and normative uncertainty regarding the legitimacy and boundaries of human-AI intimacy. These uncertainties are shaped by technical and social factors, such as algorithmic opacity, platform changes, and social stigma, often inducing frustration, self-doubt, and distress. Participants managed these uncertainties through information seeking, topic avoidance, expectation adjustment, and disengagement. This study extends interpersonal uncertainty theories to human-AI communication and contributes to HCI research by conceptualizing uncertainty in AI companionship as a socio-technical phenomenon with potential socio-emotional harms. We discuss implications for designing safer AI companionship through contextual transparency, user control, update notice, and relational safeguards.