What They Saw, Not Just Where They Looked: Semantic Scanpath Similarity via VLMs and NLP Metrics
Mohamed Amine Kerkouri, Marouane Tliba, Bin Wang, Aladine Chetouani, Ulas Bagci, et al.
TLDR
This paper introduces a framework for semantic scanpath similarity, using VLMs and NLP metrics to analyze eye-tracking data beyond purely spatial and temporal alignment.
Key contributions
- Introduces a semantic scanpath similarity framework using VLMs for eye-tracking analysis.
- Encodes fixations into textual descriptions via patch-based and marker-based visual context (see the sketch after this list).
- Computes semantic similarity using embedding-based and lexical NLP metrics.
- Demonstrates that semantic similarity captures partially independent variance from geometric alignment.
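
As a concrete illustration of the patch-based encoding step, here is a minimal sketch that crops a square patch around each fixation and captions it with a VLM. The BLIP checkpoint, the 224-pixel patch size, and the fixation coordinate format are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: patch-based fixation encoding with BLIP as a stand-in VLM.
# The paper does not specify this model; all parameters are assumptions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def describe_fixation(image: Image.Image, x: float, y: float, patch: int = 224) -> str:
    """Crop a square patch centered on the fixation and caption it."""
    half = patch // 2
    left, top = max(0, int(x) - half), max(0, int(y) - half)
    crop = image.crop((left, top, left + patch, top + patch))
    inputs = processor(images=crop, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    return processor.decode(out[0], skip_special_tokens=True)

scene = Image.open("stimulus.jpg").convert("RGB")   # hypothetical stimulus image
scanpath = [(412.0, 305.5), (120.3, 88.7)]          # fixation (x, y) coordinates
descriptions = [describe_fixation(scene, x, y) for x, y in scanpath]
```

The marker-based strategy described in the paper would instead keep the full scene and highlight the fixated location before captioning; the cropping above shows only the patch-based variant.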
Why it matters
Traditional scanpath metrics overlook the semantic content of attended regions. By integrating VLMs, this paper adds an interpretable, content-aware dimension to eye-tracking analysis, revealing agreement between observers that spatial or temporal alignment alone cannot capture.
Original Abstract
Scanpath similarity metrics are central to eye-movement research, yet existing methods predominantly evaluate spatial and temporal alignment while neglecting semantic equivalence between attended image regions. We present a semantic scanpath similarity framework that integrates vision-language models (VLMs) into eye-tracking analysis. Each fixation is encoded under controlled visual context (patch-based and marker-based strategies) and transformed into concise textual descriptions, which are aggregated into scanpath-level representations. Semantic similarity is then computed using embedding-based and lexical NLP metrics and compared against established spatial measures, including MultiMatch and DTW. Experiments on free-viewing eye-tracking data demonstrate that semantic similarity captures partially independent variance from geometric alignment, revealing cases of high content agreement despite spatial divergence. We further analyze the impact of contextual encoding on description fidelity and metric stability. Our findings suggest that multimodal foundation models enable interpretable, content-aware extensions of classical scanpath analysis, providing a complementary dimension for gaze research within the ETRA community.
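
To make the similarity computation concrete, the sketch below aggregates per-fixation descriptions into scanpath-level texts and scores them two ways: an embedding-based metric (sentence-transformers cosine similarity) and a lexical metric (token Jaccard overlap). The model choice and the Jaccard stand-in are assumptions; the paper's exact embedding model and lexical NLP metrics may differ.

```python
# Sketch: scanpath-level semantic similarity over fixation descriptions.
# Embedding model and lexical metric are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("all-MiniLM-L6-v2")

def embedding_similarity(descs_a: list[str], descs_b: list[str]) -> float:
    """Cosine similarity between aggregated scanpath descriptions."""
    emb = model.encode([" ".join(descs_a), " ".join(descs_b)])
    return float(cos_sim(emb[0], emb[1]))

def lexical_similarity(descs_a: list[str], descs_b: list[str]) -> float:
    """Jaccard overlap of description vocabularies (a simple lexical metric)."""
    tokens_a = set(" ".join(descs_a).lower().split())
    tokens_b = set(" ".join(descs_b).lower().split())
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

sp1 = ["a red car parked by a tree", "a person walking a dog"]
sp2 = ["a dog on a leash", "a parked red vehicle"]
print(embedding_similarity(sp1, sp2), lexical_similarity(sp1, sp2))
```

Two scanpaths that fixate the same objects in different locations or orders would score high here while diverging under MultiMatch or DTW, which is the complementary signal the paper reports.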