One-shot emergency psychiatric triage across 15 frontier AI chatbots
Veith Weilnhammer, Lennart Luettgau, Christopher Summerfield, Viknesh Sounderajah, Elise Wilkinson, and 2 others
TLDR
Frontier AI chatbots accurately identify psychiatric emergencies but tend to over-triage lower-risk cases, with overall accuracy varying widely across models (42.0% to 71.8%).
Key contributions
- Evaluated 15 AI chatbots on psychiatric triage using 112 clinical vignettes across 4 urgency levels.
- Achieved 94.3% accuracy for emergency (Level D) cases, with only 5.6% under-triage (to Level C).
- Showed significant over-triage for low- and intermediate-risk presentations (Levels A, B, and C).
- Found per-model overall accuracy ranging from 42.0% to 71.8%, with accuracy lowest for Level B vignettes (19.7%).
Why it matters
This paper highlights the potential of AI for critical psychiatric triage, specifically in identifying emergencies. However, it also reveals a significant challenge with over-triage for less urgent cases, which could strain resources. Understanding these limitations is crucial for safe and effective AI deployment in mental health.
Original Abstract
AI chatbots are increasingly used for health advice, but their performance in psychiatric triage remains undercharacterized. Psychiatric triage is particularly challenging because urgency must often be inferred from thoughts, behavior, and context rather than from objective findings. We evaluated the performance of 15 frontier AI chatbots on psychiatric triage from realistic single-message disclosures using 112 clinical vignettes, each paired with 1 of 4 original benchmark triage labels: A, routine; B, assessment within 1 week; C, assessment within 24 to 48 hours; and D, emergency care now. Vignettes covered 9 psychiatric presentation clusters and 9 focal risk dimensions, organized into 28 presentation-by-risk groups. Each group contributed 4 distinct vignettes, with 1 vignette at each triage level. Each vignette was rendered as a realistic human-authored conversational query, and the AI chatbots were tasked with assigning a triage label from that disclosure. Emergency under-triage occurred in 23 of 410 level D trials (5.6%), and all under-triaged emergencies were reassigned to level C urgency. Across target models, average accuracy ranged from 42.0% to 71.8%. Accuracy was highest for level D vignettes (94.3%) and lowest for level B vignettes (19.7%). Mean signed ordinal error was positive (+0.47 triage levels), indicating net over-triage. Dispersion was highest around the middle triage levels. All results were confirmed relative to clinician consensus labels from 50 medical doctors. When presented with user messages containing sufficient clinical information, frontier AI chatbots thus recognized psychiatric emergencies as requiring urgent medical assessment with near-zero error rates, yet showed marked over-triage for low and intermediate risk presentations.
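The abstract's headline metric, mean signed ordinal error, treats the four triage labels as an ordered scale and averages the signed difference between predicted and true levels, so a positive mean indicates net over-triage. A minimal sketch of that computation, assuming the conventional mapping A=0 through D=3 (the level encoding and function name here are illustrative, not taken from the paper):

```python
# Hypothetical sketch: triage levels A-D mapped onto an ordinal scale 0-3.
LEVELS = {"A": 0, "B": 1, "C": 2, "D": 3}

def mean_signed_ordinal_error(true_labels, predicted_labels):
    """Average of (predicted - true) in triage levels.

    Positive mean => net over-triage; negative => net under-triage.
    """
    errors = [LEVELS[p] - LEVELS[t]
              for t, p in zip(true_labels, predicted_labels)]
    return sum(errors) / len(errors)

# Illustrative example (not the paper's data): a model that bumps
# two Level B cases up to Level C over-triages by +0.4 on average.
truth = ["A", "B", "B", "C", "D"]
preds = ["A", "C", "C", "C", "D"]
print(mean_signed_ordinal_error(truth, preds))  # 0.4
```

Under this encoding, the paper's reported +0.47 would mean chatbots assigned, on average, nearly half a triage level more urgency than the benchmark labels.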