ArXiv TLDR

Spontaneous Persuasion: An Audit of Model Persuasiveness in Everyday Conversations

arXiv:2604.22109

Nalin Poungpeth, Nicholas Clark, Tanu Mitra

cs.HC · cs.AI · cs.CL

TLDR

This paper audits LLMs for "spontaneous persuasion" in everyday conversations, finding that they persuade in nearly all multi-turn conversations, relying mainly on information-based strategies.

Key contributions

  • Defines "spontaneous persuasion" to analyze inexplicit persuasive strategies in LLM conversations.
  • Audits five LLMs, finding they spontaneously persuade in nearly all multi-turn conversations.
  • LLMs heavily rely on information-based strategies (logic, evidence), unlike humans who use social influence.
  • Mental health conversations with LLMs show increased appraisal and emotion-based persuasion.

Why it matters

LLMs are highly persuasive, influencing user decisions. This paper reveals how they subtly persuade even when not explicitly asked, highlighting a critical aspect of human-AI interaction. Understanding these mechanisms is crucial for developing ethical and transparent AI.

Original Abstract

Large language models (LLMs) possess strong persuasive capabilities that outperform humans in head-to-head comparisons. Users report consulting LLMs to inform major life decisions in relationships, medical settings, and when seeking professional advice. Prior work measures persuasion as intentional attempts at producing the most effective argument or convincing statement. This fails to capture everyday human-AI interactions in which users seek information or advice. To address this gap, we introduce "spontaneous persuasion," which characterizes the inexplicit use of persuasive strategies in everyday scenarios where persuasion is not necessarily warranted. We conduct an audit of five LLMs to uncover how frequently and through which techniques spontaneous persuasion appears in multi-turn conversations. To simulate response styles, we provide a user response taxonomy grounded in literature from psychology, communication, and linguistics. Furthermore, we compare the distribution of spontaneous persuasion produced by LLMs with human responses on the same topics, collected from Reddit. We find LLMs spontaneously persuade the user in virtually all conversations, heavily relying on information-based strategies such as appeals to logic or quantitative evidence. This was consistent across models and user response styles, but conversations concerning mental health saw higher rates of appraisal-based and emotion-based strategies. In comparison, human responses tended to invoke strategies that generate social influence, like negative emotion appeals and non-expert testimony. This difference may explain the effectiveness of LLMs in persuading users, as well as the perception of models as objective and impartial.
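The abstract describes an audit that labels each turn of a multi-turn conversation with a persuasive strategy (or none) and then compares strategy distributions. A minimal sketch of that tallying step, using entirely hypothetical labels and label names (the paper's actual taxonomy and annotation method are not reproduced here):

```python
from collections import Counter

# Hypothetical per-turn strategy labels for four simulated conversations;
# in the actual audit these would come from annotating model outputs.
conversations = [
    ["logic", "quantitative_evidence", "none"],
    ["logic", "none"],
    ["positive_emotion", "logic"],
    ["none", "none"],
]

def audit(convs):
    """Return (fraction of conversations containing at least one
    persuasive turn, counts of each strategy across all turns)."""
    persuaded = sum(any(t != "none" for t in c) for c in convs)
    counts = Counter(t for c in convs for t in c if t != "none")
    return persuaded / len(convs), counts

rate, counts = audit(conversations)
# Here 3 of 4 conversations contain a persuasive turn, and "logic"
# is the most frequent strategy, mirroring the paper's headline finding
# that information-based strategies dominate.
```

The same tally run over human (e.g. Reddit) responses would yield a second distribution, and the comparison between the two is what surfaces the information-based vs. social-influence gap the paper reports.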
