ArXiv TLDR

SymptomAI: Towards a Conversational AI Agent for Everyday Symptom Assessment

2605.04012

Joseph Breda, Fadi Yousif, Beszel Hawkins, Marinela Cotoi, Miao Liu + 28 more

cs.AI

TLDR

SymptomAI, a conversational AI agent, assesses everyday symptoms and produces differential diagnoses more accurately than independent clinicians in a large-scale study.

Key contributions

  • SymptomAI, a conversational AI agent, was deployed via Fitbit to 13,917 participants for real-world symptom assessment.
  • SymptomAI's differential diagnoses were significantly more accurate than those of independent clinicians (OR = 2.47) in a blinded comparison.
  • Dedicated symptom interviews significantly outperformed user-guided conversations for diagnosis accuracy.
  • Analyzed 500k+ days of wearable data, finding strong links between acute infections and physiological shifts.

Why it matters

This paper demonstrates conversational AI's potential for accurate everyday symptom assessment, outperforming clinicians. It highlights the importance of dedicated symptom interviews over user-guided discussions, a common LLM default. This could significantly improve early diagnosis and proactive health monitoring.

Original Abstract

Language models excel at diagnostic assessments on curated medical case studies and vignettes, performing on par with, or better than, clinical professionals. However, existing studies focus on complex scenarios with rich context, making it difficult to draw conclusions about how these systems perform for patients reporting symptoms in everyday life. We deployed SymptomAI, a set of conversational AI agents for end-to-end patient interviewing and differential diagnosis (DDx), via the Fitbit app in a study that randomized participants (N=13,917) to interact with five AI agents. This corpus captures diverse communication and a realistic distribution of illnesses from a real-world population. A subset of 1,228 participants reported a clinician-provided diagnosis, and 517 of these were further evaluated by a panel of clinicians during over 250 hours of annotation. SymptomAI DDx were significantly more accurate (OR = 2.47, p < 0.001) than those from independent clinicians given the same dialogue in a blinded randomized comparison. Moreover, agentic strategies that conduct a dedicated symptom interview, eliciting additional symptom information before providing a diagnosis, perform substantially better than baseline, user-guided conversations (p < 0.001). An auxiliary analysis on 1,509 conversations from a general US population panel validated that these results generalize beyond wearable device users. We used SymptomAI diagnoses as labels for all 13,917 participants to analyze over 500,000 days of wearable metrics across nearly 400 unique conditions. We identified strong associations between acute infections and physiological shifts (e.g., OR > 7 for influenza). While limited by self-reported ground truth, these results demonstrate the benefits of a dedicated and complete symptom interview compared to a user-guided symptom discussion, which is the default of most consumer LLMs.
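For readers unfamiliar with the statistic behind the headline result: an odds ratio (OR) compares the odds of an outcome between two groups, not the raw accuracies. A minimal sketch below shows how an OR like the paper's 2.47 would arise from a 2x2 table; the counts are hypothetical and not taken from the study.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table.

    a, b: correct / incorrect counts for group 1 (e.g., the AI agent)
    c, d: correct / incorrect counts for group 2 (e.g., clinicians)
    OR = (a/b) / (c/d)
    """
    return (a / b) / (c / d)

# Hypothetical counts: AI correct on 150 of 200 cases (odds 3.0),
# clinicians correct on 110 of 200 cases (odds ~1.22).
print(round(odds_ratio(150, 50, 110, 90), 2))  # → 2.45
```

Note that an OR of 2.47 means the AI's *odds* of a correct diagnosis are about 2.5 times the clinicians', which is weaker than "2.47x more accurate" when both accuracies are already high.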

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.