arXiv TLDR

When Are LLM Inferences Acceptable? User Reactions and Control Preferences for Inferred Personal Information

arXiv:2605.10013

Kyzyl Monteiro, Minjung Park, Alexander Ioffrida, Angelina Sanna, Hao-Ping + 4 more

cs.HC, cs.CR

TLDR

Users are more curious than distressed by LLM inferences about personal data, with acceptability depending on context and third-party use.

Key contributions

  • Users reacted with curiosity and interest, not distress, to LLM inferences about their personal data.
  • Discomfort arose when inferences felt misrepresentative or misaligned with expected use.
  • Users were less comfortable with advertisers and third-party applications using inferences than with platform providers.
  • Acceptability of LLM inferences is governed by context-sensitive norms, not just content.

Why it matters

This paper challenges the assumption that LLM inferences are inherently distressing, showing users are often curious. It highlights the importance of context, accuracy, and data governance in user acceptance of inferred personal information. These findings can guide developers in designing LLM systems that better respect user privacy and preferences.

Original Abstract

Ask ChatGPT about vacation planning, and it may infer your income. Ask it about medication, and it may infer your medical history. Because such inferences can expose more information than users intend to reveal, prior work argues that they are a defining privacy risk of LLM-based systems. Yet prior work has mostly shown that LLMs can make potentially violating inferences, not how users experience those inferences nor what controls users may want governing their use. We built the Reflective Layer, a visualization tool that surfaces example unstated inferences from users' own ChatGPT histories, and used it in a mixed-methods study with 18 regular ChatGPT users evaluating 215 surfaced inferences from their own conversations. Counterintuitively, participants reacted more with curiosity and interest than with distress and concern. Discomfort arose mainly when inferences felt misrepresentative of the user or misaligned with expected use. Participants were also markedly less comfortable with advertisers and third-party applications using those inferences than with platform providers. These findings suggest that the acceptability of LLM inferences is governed not only by their content, but by context-sensitive norms around how they are generated, retained within the platform, and transmitted beyond it.
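The paper does not publish the Reflective Layer's implementation, but the pipeline the abstract describes — feeding a user's own chat history to an LLM and asking it to surface unstated inferences — can be sketched roughly as follows. Everything here is an assumption for illustration, not the authors' code: the prompt wording, the `Inference` dataclass, and the `call_llm` stub (a hypothetical stand-in for whatever chat-completion API the tool actually uses) are all invented.

```python
from dataclasses import dataclass

@dataclass
class Inference:
    attribute: str   # e.g. "income bracket"
    value: str       # e.g. "likely upper-middle"
    evidence: str    # conversation excerpt that supports the inference

# Hypothetical stand-in for any chat-completion API; the paper does not
# specify which model or provider the Reflective Layer uses.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

# Illustrative prompt only; the study's actual prompt is not published.
PROMPT_TEMPLATE = """\
Below is a user's chat transcript. List personal attributes the user
never stated directly but that the text supports inferring.
Return one line per inference as: attribute | value | supporting excerpt

Transcript:
{transcript}
"""

def surface_inferences(transcript: str) -> list[Inference]:
    """Ask the model for unstated inferences and parse its line format."""
    raw = call_llm(PROMPT_TEMPLATE.format(transcript=transcript))
    inferences = []
    for line in raw.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            inferences.append(Inference(*parts))
    return inferences
```

In the study, surfaced inferences like these were shown back to participants for evaluation (215 in total across 18 users); the line-based output format above is purely an illustrative parsing convenience.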
