Probing CLIP's Comprehension of 360-Degree Textual and Visual Semantics
Hai Wang, Xiaochen Yang, Mingzhi Dong, Jing-Hao Xue
TLDR
CLIP understands 360-degree textual semantics but fails to preserve semantic alignment under horizontal circular shifts of panoramas; LoRA fine-tuning improves this visual comprehension at a small cost to original evaluation performance.
Key contributions
- Introduces "360-degree textual" and "360-degree visual" semantics for panoramic images.
- Proposes novel evaluation methods for CLIP based on keyword manipulation and horizontal circular shifts of varying magnitudes (see the probing sketch after this list).
- Finds CLIP understands 360-degree textual semantics but fails on visual semantics under shifts.
- Presents a LoRA fine-tuning framework to improve CLIP's 360-degree visual comprehension.
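The circular-shift probe referenced above can be sketched in a few lines. Below is a minimal, hypothetical illustration assuming a Hugging Face CLIP checkpoint: it rolls an equirectangular panorama horizontally (which leaves 360-degree visual semantics unchanged) and compares image-text similarity before and after the shift. The checkpoint name, file path, caption, and shift fractions are illustrative, not the paper's exact protocol.

```python
# Minimal sketch of the circular-shift probe (illustrative, not the paper's exact protocol).
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def circular_shift(pano: Image.Image, fraction: float) -> Image.Image:
    """Roll an equirectangular panorama horizontally by a fraction of its width.

    The shift wraps around, so the depicted scene is unchanged."""
    arr = np.asarray(pano)
    return Image.fromarray(np.roll(arr, int(fraction * arr.shape[1]), axis=1))

@torch.no_grad()
def clip_score(pano: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP's image and text embeddings."""
    inputs = processor(text=[caption], images=pano, return_tensors="pt", padding=True)
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

pano = Image.open("panorama.jpg")  # hypothetical equirectangular image
caption = "A 360-degree panorama of a mountain lake"  # keyword manipulation: try dropping "360-degree panorama"
for frac in (0.0, 0.25, 0.5, 0.75):  # shifts of varying magnitudes
    print(f"shift {frac:.2f}: score = {clip_score(circular_shift(pano, frac), caption):.4f}")
```

A robust 360-degree evaluator should return nearly identical scores across all shift fractions; the paper finds that standard CLIP models do not.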
Why it matters
Reliably evaluating how well generated 360-degree panoramas align with their text prompts is crucial for immersive world creation. This paper pinpoints CLIP's limitations in understanding panoramic visual semantics, and it offers insights and a fine-tuning approach to adapt CLIP for more reliable 360-degree content evaluation.
Original Abstract
The dream of instantly creating rich 360-degree panoramic worlds from text is rapidly becoming a reality, yet a crucial gap exists in our ability to reliably evaluate their semantic alignment. Contrastive Language-Image Pre-training (CLIP) models, standard AI evaluators, predominantly trained on perspective image-text pairs, face an open question regarding their understanding of the unique characteristics of 360-degree panoramic image-text pairs. This paper addresses this gap by first introducing two concepts: *360-degree textual semantics*, semantic information conveyed by explicit format identifiers, and *360-degree visual semantics*, invariant semantics under horizontal circular shifts. To probe CLIP's comprehension of these semantics, we then propose novel evaluation methodologies using keyword manipulation and horizontal circular shifts of varying magnitudes. Rigorous statistical analyses across popular CLIP configurations reveal that: (1) CLIP models effectively leverage explicit textual identifiers, demonstrating an understanding of 360-degree textual semantics; and (2) CLIP models fail to robustly preserve semantic alignment under horizontal circular shifts, indicating limited comprehension of 360-degree visual semantics. To address this limitation, we propose a LoRA-based fine-tuning framework that explicitly instills invariance to circular shifts. Our fine-tuned models exhibit improved comprehension of 360-degree visual semantics, though with a slight degradation in original semantic evaluation performance, highlighting a fundamental trade-off in adapting CLIP to 360-degree panoramic images. Code is available at https://github.com/littlewhitesea/360Semantics.
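The fine-tuning direction described in the abstract suggests a simple recipe: attach LoRA adapters to CLIP and penalize embedding differences between a panorama and its circularly shifted copy. The sketch below is one plausible instantiation using the peft library with a cosine-similarity invariance loss; the paper's actual objective, LoRA targets, and hyperparameters may differ (see the repository linked above for the authors' implementation).

```python
# Hypothetical LoRA fine-tuning sketch for shift invariance (assumptions: peft LoRA on
# CLIP attention projections, cosine invariance loss; the paper's recipe may differ).
import torch
import torch.nn.functional as F
from peft import LoraConfig, get_peft_model
from transformers import CLIPModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # matches attention projections in both towers; restrict as needed
)
model = get_peft_model(clip, lora_cfg)  # only the LoRA weights are trainable

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def invariance_loss(pixel_values: torch.Tensor, shift_fraction: float) -> torch.Tensor:
    """Encourage identical embeddings for a panorama and its horizontally rolled copy.

    Rolling the preprocessed tensor approximates rolling the source panorama,
    assuming the full equirectangular image was resized without cropping."""
    shifted = torch.roll(pixel_values, shifts=int(shift_fraction * pixel_values.shape[-1]), dims=-1)
    emb = model.get_image_features(pixel_values=pixel_values)
    emb_shifted = model.get_image_features(pixel_values=shifted)
    return (1.0 - F.cosine_similarity(emb, emb_shifted, dim=-1)).mean()

# One hypothetical training step on a batch of preprocessed panoramas.
pixel_values = torch.randn(4, 3, 224, 224)  # stand-in batch; use real preprocessed panoramas
loss = invariance_loss(pixel_values, shift_fraction=0.5)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

An invariance-only objective like this would explain the trade-off the abstract reports: pushing shifted and unshifted embeddings together can also drift them away from the representation that made CLIP a good general-purpose evaluator.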