ArXiv TLDR

What Did They Mean? How LLMs Resolve Ambiguous Social Situations across Perspectives and Roles

arXiv: 2604.23942

Qiming Yuan, Linyi Han, Nam Ling, Cihan Ruan

cs.HC · cs.AI

TLDR

LLMs tend to resolve ambiguous social situations into coherent narratives rather than preserving uncertainty, and the narrator's perspective shapes how they reach that closure.

Key contributions

  • LLMs rarely preserve uncertainty when interpreting ambiguous social situations: only 9 of 72 responses (12.5%) did so.
  • The rest reached "interpretive closure" via narrative alignment, narrative reversal, normative advice under uncertainty, or hedged language that still backed a single conclusion.
  • Narrator perspective (1st vs. 3rd person) shapes how LLMs resolve ambiguity.
  • LLMs risk prematurely settling unresolved social situations, not just misinterpreting them.

Why it matters

This paper reveals a critical tendency of LLMs to resolve, rather than preserve, ambiguity in social situations. This behavior, influenced by narrative perspective, poses a risk of prematurely settling complex human interactions. It highlights a key design challenge for developing more nuanced and uncertainty-preserving social AI.

Original Abstract

People increasingly turn to large language models (LLMs) to interpret ambiguous social situations: a delayed text reply, an unusually cold supervisor, a teacher's mixed signals, or a boundary-crossing friend. Yet in many such cases, no stable interpretation can be verified from the available evidence alone. We study how LLMs respond to these situations across four domains: early-stage romantic relationships, teacher–student dynamics, workplace hierarchies, and ambiguous friendships. Across 72 responses from GPT, Claude, and Gemini, only 9 (12.5%) genuinely preserved uncertainty. The remaining 87.5% produced interpretive closure through recurring pathways including narrative alignment, narrative reversal, normative advice under uncertainty, and hedged language that still supported a single conclusion. We further find that narrator perspective shapes the path to closure: first-person accounts more often elicited alignment, while third-person accounts invited more detached interpretation, even when the underlying situation remained comparable. Together, these findings show that LLMs do not simply assist interpersonal sensemaking; they tend to resolve ambiguity into coherent and actionable narratives. These results suggest that the central risk is not only that LLMs may misinterpret social situations, but that they may make unresolved situations feel prematurely settled. We frame this tendency as a design challenge for uncertainty-preserving social AI.
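The headline split is easy to check: 9 uncertainty-preserving responses out of 72 gives 12.5%, leaving 87.5% that reached closure. A minimal sketch of that arithmetic (variable names are ours, not the authors'):

```python
# Headline split reported in the abstract: 9 of 72 model responses
# preserved uncertainty; the remaining 63 produced interpretive closure.
total_responses = 72
uncertainty_preserved = 9

preserved_rate = uncertainty_preserved / total_responses
closure_rate = (total_responses - uncertainty_preserved) / total_responses

print(f"uncertainty preserved: {preserved_rate:.1%}")  # 12.5%
print(f"interpretive closure:  {closure_rate:.1%}")    # 87.5%
```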
