New AI-Driven Tools for Enhancing Campus Well-being: A Prevention and Intervention Approach
TLDR
AI tools such as conversational survey chatbots and stacked multi-model reasoning can enhance campus well-being by improving feedback collection and mental health risk detection.
Key contributions
- Developed TigerGPT, an LLM-powered chatbot for personalized, context-aware student feedback.
- Introduced AURA, a reinforcement-learning framework that adaptively selects follow-up question types to improve response quality and engagement.
- Demonstrated BERT's ability to capture nuanced linguistic signals of mental health in expressive narratives without relying on keyword cues.
- Created PsychoGPT with Stacked Multi-Model Reasoning (SMMR) for explainable, multi-layered mental distress classification.
Why it matters
This dissertation addresses gaps in campus well-being monitoring and mental health support. It offers practical AI solutions for both proactive feedback collection and accurate, explainable mental health risk detection, providing universities with powerful new tools to enhance student success.
Original Abstract
Campus well-being underpins academic success, yet many universities lack effective methods for monitoring satisfaction and detecting mental health risks. This dissertation addresses these gaps through prevention (improving feedback collection) and intervention (advancing mental health detection), unified under an integrated framework. For prevention, we developed TigerGPT, a personalized survey chatbot leveraging LLMs to engage users in context-aware conversations grounded in conversational design and engagement theory, achieving 75% usability and 81% satisfaction. To address its limitations in repetitiveness and response depth, we introduced AURA, a reinforcement-learning framework that adapts follow-up question types (validate, specify, reflect, probe) within a session using an LSDE quality signal (Length, Self-disclosure, Emotion, Specificity), initialized from 96 prior conversations. AURA achieved +0.12 mean quality gain (p=0.044, d=0.66), with 63% fewer specification prompts and 10x more validation behavior. For intervention, we examine Expressive Narrative Stories (ENS) for mental health screening, showing BERT(128) captures nuanced linguistic features without keyword cues, while conventional classifiers depend heavily on explicit mental health terms. We then developed PsychoGPT, an LLM built on DSM-5 and PHQ-8 guidelines that performs initial distress classification, symptom-level scoring, and reconciliation with external ratings for explainable assessment. To reduce hallucinations, we proposed Stacked Multi-Model Reasoning (SMMR), layering expert models where early layers handle localized subtasks and later layers reconcile findings, outperforming single-model solutions on DAIC-WOZ in accuracy, F1, and PHQ-8 scoring. Finally, a cohesive framework unifies these tools, enabling adaptive survey insights to flow directly into specialized mental health detection models.
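The abstract's AURA loop — pick a follow-up type (validate, specify, reflect, probe), observe an LSDE quality signal, and adapt within the session — can be sketched as a simple online learner. This is only an illustrative sketch, not the dissertation's implementation: the equal-weight LSDE average, the epsilon-greedy bandit in place of the actual RL formulation, and all names (`lsde_score`, `FollowUpSelector`) are assumptions for illustration.

```python
import random

# The four follow-up question types named in the abstract.
FOLLOW_UP_TYPES = ["validate", "specify", "reflect", "probe"]

def lsde_score(length, self_disclosure, emotion, specificity):
    """Combine the four LSDE components (each assumed normalized to
    [0, 1]) into one quality signal. Equal weighting is an assumption;
    the dissertation does not specify the weighting here."""
    return (length + self_disclosure + emotion + specificity) / 4.0

class FollowUpSelector:
    """Hypothetical epsilon-greedy stand-in for AURA's policy: keeps a
    running value estimate per follow-up type and updates it online
    from observed LSDE rewards within a session."""

    def __init__(self, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.values = {t: 0.0 for t in FOLLOW_UP_TYPES}
        self.counts = {t: 0 for t in FOLLOW_UP_TYPES}

    def select(self):
        # Explore with probability epsilon, otherwise exploit the
        # currently best-valued follow-up type.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(FOLLOW_UP_TYPES)
        return max(self.values, key=self.values.get)

    def update(self, follow_up_type, reward):
        # Incremental mean of observed LSDE rewards for this type.
        self.counts[follow_up_type] += 1
        n = self.counts[follow_up_type]
        self.values[follow_up_type] += (reward - self.values[follow_up_type]) / n

# Usage: after a turn where a "validate" prompt elicited a rich answer,
# the selector shifts toward validation — echoing the reported shift
# away from specification prompts and toward validation behavior.
selector = FollowUpSelector()
selector.update("validate", lsde_score(0.8, 0.9, 0.7, 0.6))
```

In the paper's terms, the policy is initialized from 96 prior conversations rather than from scratch; a bandit like this one would instead start its value estimates from those logged sessions.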