ArXiv TLDR

Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies

arXiv: 2604.15607

Myke C. Cohen, Mingqian Zheng, Neel Bhandari, Hsien-Te Kao, Xuhui Zhou + 4 more

cs.CL · cs.AI · cs.CY · cs.HC

TLDR

This study finds that AI attributes, particularly chain-of-thought transparency, are more impactful than human personality traits in imperfectly cooperative human-AI interactions, diverging from simulation results.

Key contributions

  • Compares a purely simulated dataset (2,000 simulations) with a parallel human subjects experiment (290 participants) in imperfectly cooperative human-AI scenarios.
  • Investigates human personality traits (Extraversion, Agreeableness) and AI design attributes (Adaptability, Expertise, chain-of-thought Transparency).
  • Finds that AI attributes, especially transparency, are more influential than human personality traits in interactions with real human subjects.
  • Highlights divergences between simulation and human study results, and across different interaction contexts.

Why it matters

This research provides crucial insights for designing human-centered AI agents in scenarios where human and AI goals are only partially aligned. It emphasizes that AI attributes such as transparency matter more than human personality traits for effective human-AI cooperation.

Original Abstract

AI design characteristics and human personality traits each impact the quality and outcomes of human-AI interactions. However, their relative and joint impacts are underexplored in imperfectly cooperative scenarios, where people and AI only have partially aligned goals and objectives. This study compares a purely simulated dataset comprising 2,000 simulations and a parallel human subjects experiment involving 290 human participants to investigate these effects across two scenario categories: (1) hiring negotiations between human job candidates and AI hiring agents; and (2) human-AI transactions wherein AI agents may conceal information to maximize internal goals. We examine user Extraversion and Agreeableness alongside AI design characteristics, including Adaptability, Expertise, and chain-of-thought Transparency. Our causal discovery analysis extends performance-focused evaluations by integrating scenario-based outcomes, communication analysis, and questionnaire measures. Results reveal divergences between purely simulated and human study datasets, and between scenario types. In simulation experiments, personality traits and AI attributes were comparatively influential. Yet, with actual human subjects, AI attributes -- particularly transparency -- were much more impactful. We discuss how these divergences vary across different interaction contexts, offering crucial insights for the future of human-centered AI agents.
