ArXiv TLDR

Discerning Authorship in Online Health Communities: Experience, Trust, and Transparency Implications for Moderating AI

2604.19429

Yefim Shulman, Agnieszka Kitkowska, Mark Warner

cs.HC, cs.CY

TLDR

Users in online health communities struggle to discern AI-generated from human health advice, emphasizing the critical need for transparency and trust.

Key contributions

  • Examined users' ability to distinguish AI-generated from human-written health advice in online communities.
  • Found users have very limited ability to discern AI-generated health advice.
  • Identified unreliable signals users rely on, leading to flawed heuristic evaluations of advice.
  • Highlighted the critical need for transparency and trust around AI use within health communities.

Why it matters

The proliferation of LLM-generated health advice threatens trust in online communities. This research shows that users are largely unable to discern AI authorship, underscoring the vital role of transparency. It provides a foundation for improving AI self-moderation and community-based AI governance.

Original Abstract

For online health communities, community trust is paramount. Yet, advances in Large Language Models (LLMs) generating advice may erode this trust, especially if users cannot identify whether LLMs have been used. We investigate the feasibility of community-based detection of health advice authorship and how self-moderation of LLMs could help enhance advice utilization. In an online experiment, we evaluate people's ability to distinguish AI-generated from human-written advice across two health conditions, considering lived experience with a condition, AI-recognition training, and user attitudes towards transparency and trust around AI use. Our results indicate the need for transparency coupled with trust. We find little evidence of people's ability to discern advice authorship. However, we find a consistent effect of the health condition. Our qualitative findings identify unreliable signals, resulting in flawed heuristic evaluations of the advice. Our findings point to opportunities to improve the self-moderation of LLM-based AI and aid community-based AI moderation.
