Beyond Semantics: An Evidential Reasoning-Aware Multi-View Learning Framework for Trustworthy Mental Health Prediction
Yucheng Ruan, Ling Huang, Qika Lin, Kai He, Mengling Feng
TLDR
This paper introduces an evidential multi-view learning framework that integrates semantic and reasoning information for trustworthy, uncertainty-aware mental health prediction.
Key contributions
- Integrates semantic (encoder) and reasoning (decoder) views for robust mental health prediction.
- Employs evidential learning (Subjective Logic) to explicitly model prediction uncertainty.
- Uses an evidential fusion strategy to balance views and discount unreliable evidence.
- Achieves improved accuracy and trustworthy uncertainty on three real-world datasets.
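The contributions above hinge on Subjective Logic, in which a classifier outputs non-negative evidence per class rather than a softmax distribution; evidence parameterizes a Dirichlet distribution whose total strength determines how much belief mass is withheld as explicit uncertainty. The sketch below illustrates this standard mapping; the function name `opinion_from_evidence` is ours, and the paper's exact network heads and loss are not shown here.

```python
def opinion_from_evidence(evidence):
    """Turn non-negative per-class evidence e_k into a Subjective Logic opinion.

    Dirichlet parameters: alpha_k = e_k + 1, strength S = sum(alpha).
    Belief masses: b_k = e_k / S; uncertainty: u = K / S, so sum(b) + u = 1.
    Little total evidence -> large u (the model admits it does not know).
    """
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    belief = [e / S for e in evidence]
    uncertainty = K / S
    return belief, uncertainty
```

For example, strong evidence for one class (say `[8.0, 0.0]` in a binary task) yields high belief in that class with low uncertainty, while zero evidence yields a fully vacuous opinion with `u = 1` — the behavior that lets the framework flag ambiguous or noisy inputs instead of guessing overconfidently.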
Why it matters
Current mental health prediction models often lack reliable uncertainty estimates, producing overconfident and untrustworthy predictions in critical applications. By pairing trustworthy uncertainty quantification with interpretable reasoning signals, this framework makes automated prediction a more viable tool for high-stakes mental health assessment.
Original Abstract
Automated mental health prediction using textual data has shown promising results with deep learning and large language models. However, deploying these models in high-stakes real-world settings remains challenging, as existing approaches largely rely on semantic representations and often produce overconfident predictions under ambiguous, noisy, or shifted data. Moreover, most methods lack reliable uncertainty estimation, undermining trust in risk-sensitive mental health applications. To address these limitations, we formulate the task as a multi-view learning problem that integrates semantic information from encoder-only models with higher-level reasoning information from decoder-only models, where reasoning-aware representations and uncertainty modeling are obtained in a trustworthy manner. To ensure reliable fusion, we adopt an evidential learning framework based on Subjective Logic to explicitly model uncertainty and introduce an evidential fusion strategy that balances complementary views while discounting unreliable evidence. Benchmarking on three real-world datasets, Dreaddit, SDCNL, and DepSeverity, reports accuracies of 0.835, 0.731, and 0.751, respectively, demonstrating its potential for reliable mental health prediction. Additional experiments on robustness to noise and case studies for interpretability confirm that our proposed framework not only improves predictive performance but also provides trustworthy uncertainty estimates and human-understandable reasoning signals, making it suitable for risk-sensitive applications in mental health assessment.
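The abstract's "evidential fusion strategy that balances complementary views while discounting unreliable evidence" is not spelled out here; a common choice in evidential multi-view learning is the reduced Dempster combination rule, which merges two Subjective Logic opinions (belief masses `b` plus uncertainty `u`) so that agreement reinforces belief, a confident view outweighs an uncertain one, and conflicting mass is renormalized away. The sketch below assumes that rule; the paper's actual fusion may differ in how it discounts evidence.

```python
def fuse_opinions(b1, u1, b2, u2):
    """Combine two Subjective Logic opinions (e.g. the semantic view's
    and the reasoning view's) with the reduced Dempster combination rule.

    C measures conflict: belief mass the two views place on different
    classes. Fused belief keeps agreement (b1*b2) plus each view's belief
    weighted by the other's uncertainty; 1/(1-C) renormalizes.
    """
    K = len(b1)
    C = sum(b1[i] * b2[j] for i in range(K) for j in range(K) if i != j)
    scale = 1.0 / (1.0 - C)
    b = [(b1[k] * b2[k] + b1[k] * u2 + b2[k] * u1) * scale for k in range(K)]
    u = u1 * u2 * scale
    return b, u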