How Do LLMs and VLMs Understand Viewpoint Rotation Without Vision? An Interpretability Study
Zhen Yang, Ping Jian, Zhongbin Guo, Zuming Zhang, Chengzhi Li, et al.
TLDR
This paper investigates how LLMs and VLMs understand viewpoint rotation from text alone, finding that they struggle to bind the viewpoint position to the corresponding observation, and that selectively fine-tuning key attention heads improves performance.
Key contributions
- Introduces a text-only dataset for Viewpoint Rotation Understanding (VRU) to evaluate LLMs and VLMs (a toy task instance is sketched after this list).
- Reveals LLMs/VLMs perform poorly on VRU, struggling to bind viewpoint position with observations.
- Uses probing and causal intervention to identify attention heads responsible for VRU failures.
- Proposes selective fine-tuning of key heads, improving VRU performance without catastrophic forgetting.
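The exact dataset format is not reproduced in this digest, but a toy instance conveys the flavor of the task: the model reads a sequence of turns, must track its facing direction, and report the observation at the final viewpoint. The sketch below is a hypothetical generator written for illustration only; the wording, turn vocabulary, and field names are placeholders, not the authors' dataset schema.

```python
# Hypothetical sketch of a text-only VRU item (illustrative, not the paper's format).
import random

DIRECTIONS = ["north", "east", "south", "west"]
TURNS = {"turn left": -1, "turn right": 1, "turn around": 2}  # 90-degree steps

def make_vru_item(observations, num_steps=3, seed=0):
    """Build one multi-step viewpoint-rotation question.

    `observations` maps each absolute direction to what is visible there.
    The model must track the facing direction across turns and report the
    observation at the final viewpoint.
    """
    rng = random.Random(seed)
    facing = rng.randrange(4)  # index into DIRECTIONS
    lines = [f"You start facing {DIRECTIONS[facing]}."]
    for _ in range(num_steps):
        turn = rng.choice(list(TURNS))
        facing = (facing + TURNS[turn]) % 4
        lines.append(f"You {turn}.")
    question = " ".join(lines) + " What do you see now?"
    return {"question": question, "answer": observations[DIRECTIONS[facing]]}

item = make_vru_item(
    {"north": "a lake", "east": "a tower", "south": "a forest", "west": "a bridge"}
)
print(item["question"])
print("gold:", item["answer"])
```

A human can solve such items with perfect accuracy by mentally tracking the rotation, which is what makes the reported model failures notable.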
Why it matters
Spatial intelligence is crucial, yet current models cannot reliably track viewpoint rotations from text alone. This work exposes a significant gap in LLM/VLM spatial reasoning and provides interpretability insights into why the models fail. The proposed selective fine-tuning offers a promising path to stronger spatial understanding without sacrificing general abilities.
Original Abstract
Over the past year, spatial intelligence has drawn increasing attention. Many prior works study it from the perspective of visual-spatial intelligence, where models have access to visuospatial information from visual inputs. However, in the absence of visual information, whether linguistic intelligence alone is sufficient to endow models with spatial intelligence, and how models perform relevant tasks with text-only inputs, still remain unexplored. Therefore, in this paper, we focus on a fundamental and critical capability in spatial intelligence from a linguistic perspective: viewpoint rotation understanding (VRU). Specifically, LLMs and VLMs are asked to infer their final viewpoint and predict the corresponding observation in an environment given textual descriptions of viewpoint rotations and observations over multiple steps. We find that both LLMs and VLMs perform poorly on our proposed dataset while humans can easily achieve 100% accuracy, indicating a substantial gap between current model capabilities and the requirements of spatial intelligence. To uncover the underlying mechanisms, we conduct a layer-wise probing analysis and head-wise causal intervention. Our findings reveal that although models encode viewpoint information in the hidden states, they appear to struggle to bind the viewpoint position with the corresponding observation, resulting in hallucination in the final layers. Finally, we selectively fine-tune the key attention heads identified by causal intervention to improve VRU performance. Experimental results demonstrate that such selective fine-tuning achieves improved VRU performance while avoiding catastrophic forgetting of generic abilities. Our dataset and code will be released at https://github.com/Young-Zhen/VRU_Interpret.
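To make the layer-wise probing idea concrete, here is a minimal sketch assuming a HuggingFace causal LM and a small labeled set of VRU-style prompts: for each layer, a linear probe is trained to decode the final facing direction from the last token's hidden state. The model name, prompts, labels, and probe setup are illustrative placeholders, not the paper's actual experimental configuration.

```python
# Minimal layer-wise probing sketch (placeholder model and toy data).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

model_name = "gpt2"  # placeholder; the paper's models may differ
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_states(prompt):
    """Return the last token's hidden state at every layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: one (1, seq_len, dim) tensor per layer (incl. embeddings)
    return [h[0, -1].numpy() for h in out.hidden_states]

# Toy stand-ins for VRU prompts and their gold final directions.
prompts = ["You face north. You turn right. You turn right.",
           "You face north. You turn left."] * 20
labels = ["south", "west"] * 20

# Transpose to get, for each layer, one feature vector per prompt.
feats_per_layer = list(zip(*[last_token_states(p) for p in prompts]))
for layer, feats in enumerate(feats_per_layer):
    X_tr, X_te, y_tr, y_te = train_test_split(list(feats), labels,
                                              test_size=0.5, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}: probe accuracy = {probe.score(X_te, y_te):.2f}")
```

In the paper's framing, high probe accuracy at intermediate layers combined with wrong final answers would indicate that viewpoint information is encoded but not correctly bound to the observation during generation; the head-wise causal intervention and selective fine-tuning then target the attention heads implicated in that failure.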