Priming, Path-dependence, and Plasticity: Understanding the molding of user-LLM interaction and its implications from (many) chat logs in the wild
Shengqi Zhu, Jeffrey M. Rzeszotarski, David Mimno
TLDR
This paper analyzes 140K LLM chat logs, revealing that user interaction patterns stabilize rapidly and that users explore less over time despite the open-ended input space.
Key contributions
- User interaction patterns with LLMs stabilize rapidly based on early individual trajectories.
- Early user exploration strongly correlates with long-term outcomes like retention and recurring text patterns.
- Parallel dynamics include task-specific expressions (e.g., emotional support) and responses to model updates.
- Identifies an "agency paradox": users explore less despite unconstrained and user-driven LLM input spaces.
Why it matters
This paper offers insights into real-world user-LLM interaction dynamics that in-lab studies often miss. It shows how early experiences quickly mold user behavior, producing an "agency paradox" in which users explore less despite an unconstrained, user-driven input space. This understanding is vital for designing more effective and adaptive LLM systems.
Original Abstract
User interactions with LLMs are shaped by prior experiences and individual exploration, but in-lab studies do not provide system designers with visibility into these in-the-wild factors. This work explores a new approach to studying real-world user-LLM interactions through large-scale chat logs from the wild. Through analysis of 140K chatbot sessions from 7,955 anonymized global users over time, we demonstrate key patterns in user expressions despite varied tasks: (1) LLM users are not tabula rasa, nor are they constantly adapting; rather, interaction patterns form and stabilize rapidly through individual early trajectories; (2) Longitudinal outcomes, such as recurring text patterns and retention rates, are strongly correlated with early exploration; (3) Parallel dynamics are present, including organizing expressions by task types such as emotional support, or in response to model-version updates. These results present an "agency paradox": despite LLM input spaces being unconstrained and user-driven, we in fact see less user exploration. We call for design consideration surrounding the molding procedure and its incorporation in future research.