Why Do Vision Language Models Struggle To Recognize Human Emotions?
Madhav Agarwal, Sotirios A. Tsaftaris, Laura Sevilla-Lara, Steven McDonagh
TLDR
VLMs struggle with emotion recognition due to long-tailed data and poor temporal understanding; this paper identifies these issues and proposes solutions.
Key contributions
- Identifies that VLMs struggle with emotion recognition due to long-tailed datasets, collapsing rare emotions.
- Shows that the sparse temporal sampling used by VLMs misses fleeting micro-expressions (0.25–0.5 seconds), which often carry the most critical emotional signal.
- Proposes alternative sampling strategies to address head-class bias in emotion datasets.
- Introduces a multi-stage context enrichment strategy using natural language summaries of "in-between" frames.
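The contributions above mention sampling strategies that keep common ("head") classes from dominating. The digest does not spell out the paper's exact scheme, but the standard remedy for long-tailed labels is inverse-frequency reweighting, sketched below; `inverse_frequency_weights` is an illustrative helper, not the authors' code.

```python
from collections import Counter
import random

def inverse_frequency_weights(labels):
    """Weight each sample by the inverse of its class frequency,
    so rare emotion classes are drawn as often as common ones."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Toy long-tailed emotion label list: "happy" is the head class.
labels = ["happy"] * 8 + ["fear"] * 1 + ["disgust"] * 1
weights = inverse_frequency_weights(labels)

# Draw a rebalanced sample; each class now has equal probability (1/3),
# so rare classes appear about as often as "happy".
random.seed(0)
resampled = random.choices(labels, weights=weights, k=300)
```

With these weights, each class contributes the same total probability mass regardless of how many raw examples it has, which counteracts the head-class bias described above.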
Why it matters
The paper pinpoints both data-related (long-tailed class distributions) and architectural (sparse temporal sampling) flaws that keep current VLMs from recognizing human emotions, a core requirement for human-computer interaction. Its proposed remedies offer concrete pathways toward more emotionally intelligent AI systems.
Original Abstract
Understanding emotions is a fundamental ability for intelligent systems to be able to interact with humans. Vision-language models (VLMs) have made tremendous progress in the last few years for many visual tasks, potentially offering a promising solution for understanding emotions. However, it is surprising that even the most sophisticated contemporary VLMs struggle to recognize human emotions or to outperform even specialized vision-only classifiers. In this paper, we ask the question "Why do VLMs struggle to recognize human emotions?", and observe that the inherently continuous and dynamic task of dynamic facial expression recognition (DFER) exposes two critical VLM vulnerabilities. First, emotion datasets are naturally long-tailed, and the web-scale data used to pre-train VLMs exacerbates this head-class bias, causing them to systematically collapse rare, under-represented emotions into common categories. We propose alternative sampling strategies that prevent favoring common concepts. Second, temporal information is critical for understanding emotions. However, VLMs are unable to represent temporal information over dense frame sequences, as they are limited by context size and the number of tokens that can fit in memory, which poses a clear challenge for emotion recognition. We demonstrate that the sparse temporal sampling strategy used in VLMs is inherently misaligned with the fleeting nature of micro-expressions (0.25-0.5 seconds), which are often the most critical affective signal. As a diagnostic probe, we propose a multi-stage context enrichment strategy that utilizes the information from "in-between" frames by first converting them into natural language summaries. This enriched textual context is provided as input to the VLM alongside sparse keyframes, preventing attentional dilution from excessive visual data while preserving the emotional trajectory.