ArXiv TLDR

Synthetic Users, Real Differences: an Evaluation Framework for User Simulation in Multi-Turn Conversations

2605.02624

Yu Lu Liu, Hyokun Yun, Tanya Roosta, Ziang Xiao

cs.CL

TLDR

This paper introduces realsim, a framework for evaluating how realistic user simulations are in multi-turn conversations, and finds that simulated users often miss the communication frictions real users introduce.

Key contributions

  • Proposes `realsim`, an evaluation framework for user simulation realism in multi-turn conversations.
  • Evaluates simulations distributionally across 8 dimensions, including communicative functions and user states.
  • Finds simulated users struggle to capture real communication frictions, potentially leading to overly optimistic evaluations.
  • Highlights the need for domain-specific user simulators due to observed performance variability across domains.
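The distributional view above can be illustrated with a small sketch: rather than judging individual dialogues, compare the distribution of a per-turn attribute (e.g., a communicative-function label) between real and simulated dialogue corpora. The labels, helper names, and the choice of Jensen-Shannon divergence here are illustrative assumptions, not the paper's actual implementation.

```python
import math
from collections import Counter

def distribution(labels):
    """Normalize a list of categorical labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions.

    Symmetric and bounded in [0, 1]; 0 means identical distributions.
    """
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-turn labels along one dimension (communicative function):
# real users introduce corrections/frictions that the simulator rarely produces.
real = ["request", "clarify", "correct", "request", "clarify"]
simulated = ["request", "request", "request", "clarify", "request"]

score = js_divergence(distribution(real), distribution(simulated))
```

A higher score on a dimension signals that the simulator's behavior diverges from real users on that attribute; repeating this per dimension and per domain is one way to surface the domain variability the paper reports.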

Why it matters

This paper provides a rigorous framework for evaluating user simulation realism, which is critical for accurate chatbot evaluation. It reveals that current simulators often miss real user behaviors, which can produce misleading evaluation results, and it guides practitioners toward developing more realistic and reliable user simulations.

Original Abstract

There is growing interest in exploring user simulation as an alternative to gathering and scoring real user-chatbot interactions for AI chatbot evaluation. For this purpose, it is important to ensure the realism of the simulation, i.e., the extent to which simulated dialogues reflect real dialogues users have with chatbots. Most existing methods evaluating simulation realism produce coarse quality signal and remain solely at the level of individual dialogues. To support more rigorous evaluation in this area, we propose realsim, an evaluation framework that enables practitioners to take a distributional view of real vs. simulated dialogues along 8 dimensions, covering attributes related to the communicative functions of the interaction, user states, and the surface form of user messages. We then instantiate the framework with a curated dataset of 1K multi-turn task-focused real user-chatbot dialogues that cover 16 domains of chatbot applications. Overall, we find that simulated users tend to struggle at capturing communication frictions that real users introduce to interactions, which could make evaluations based on such simulations overly optimistic. We also observe variability in performance across different domains, which may indicate a need for domain-specific user simulators.
