ArXiv TLDR

A systematic evaluation of vision-language models for observational astronomical reasoning tasks

arXiv: 2604.24589

Wenke Ren, Hengxiao Guo, Wenwen Zuo, Xiaoman Zhang

cs.AI · astro-ph.GA · astro-ph.IM

TLDR

AstroVLBench evaluates vision-language models on diverse astronomical tasks, revealing key gaps in physical grounding and modality handling.

Key contributions

  • Created AstroVLBench with 4,100+ expert-verified instances spanning five astronomical data modalities: optical imaging, radio interferometry, multi-wavelength photometry, time-domain light curves, and optical spectroscopy.
  • Evaluated six frontier VLMs; Gemini 3 Pro is the most consistently capable, yet all models substantially underperform domain-specialized methods.
  • Physically grounded prompts that explain why features matter improve accuracy and reduce class-specific bias more than phenomenological prompts that only describe what to look for (a contrast sketched below this list).
  • Feeding one-dimensional measurements to the models as numerical tables instead of rendered plots improves accuracy by up to 13 percentage points, exposing a modality-representation bottleneck.
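
The prompt ablation is easiest to picture with a small sketch. Below are two prompt framings for the same hypothetical light-curve classification task: the phenomenological one says what to look for, the physical one says why those features matter. The task, class labels, wording, and message schema are illustrative assumptions, not the benchmark's actual prompts or API.

# Minimal sketch of the two prompt framings compared in the paper.
# The classification task, class labels, and wording are illustrative
# assumptions, not the benchmark's actual prompts.

PHENOMENOLOGICAL_PROMPT = """\
You are shown a light curve. Look at the rise time, the peak brightness,
and how quickly the source fades after maximum.
Classify the source as one of: supernova, periodic variable star, AGN."""

PHYSICAL_PROMPT = """\
You are shown a light curve. A fast rise followed by a weeks-long decline
suggests a supernova powered by radioactive decay; strictly repeating
brightness changes indicate a pulsating or eclipsing variable star;
stochastic, long-lived variability points to accretion onto a supermassive
black hole (an AGN). Using this reasoning, classify the source as one of:
supernova, periodic variable star, AGN."""

def build_messages(prompt: str, image_b64: str) -> list[dict]:
    """Package one prompt plus a base64-encoded plot for a generic
    chat-style VLM API (the message schema here is an assumption)."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image", "image_base64": image_b64},
        ],
    }]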

Why it matters

This paper provides the first systematic, multi-modal baselines for VLMs on real astronomical observations, exposing critical weaknesses in physical grounding and modality handling. By pinpointing where representation, grounding, and reasoning break down, it points model developers at the specific bottlenecks to fix before VLMs can be trusted for scientific use.

Original Abstract

Vision-language models (VLMs) are increasingly proposed as general-purpose tools for scientific data interpretation, yet their reliability on real astronomical observations across diverse modalities remains untested. We present AstroVLBench, a comprehensive benchmark comprising over 4,100 expert-verified instances across five tasks spanning optical imaging, radio interferometry, multi-wavelength photometry, time-domain light curves, and optical spectroscopy. Evaluating six frontier models, we find that performance is strongly modality-dependent: while one model (Gemini 3 Pro) emerges as the most consistently capable across tasks, task-specific strengths vary, and all models substantially underperform domain-specialized methods. Mechanistic ablations reveal that performance depends not only on directing attention to salient visual features but also on grounding those features in physical knowledge. Phenomenological prompts describing what to look for improve accuracy by sharpening model focus, but physical prompts explaining why those features matter perform better overall and yield more balanced classifications with reduced class-specific bias. Consistent with this picture, presenting the underlying one-dimensional measurements directly as numerical tables instead of rendered plots yields up to 13 percentage points improvement. Reasoning quality analysis further demonstrates that, without explicit physical grounding, models may reach correct predictions from phenomenologically plausible cues while providing physically imprecise justifications, establishing that accuracy alone is insufficient for trustworthy scientific deployment. These findings provide the first systematic, multi-modal baselines for VLMs in observational astronomy and identify the specific representation, grounding, and reasoning bottlenecks where current models fail.
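
As a companion to the 13-percentage-point result, here is a minimal sketch of the two ways the same one-dimensional measurements can reach a model: serialized as a plain numerical table in the text prompt, or rendered as a plot image. The synthetic light curve, column names, and formatting are illustrative assumptions, not the benchmark's data or serialization.

import base64
import io

import matplotlib.pyplot as plt
import numpy as np

# Synthetic light curve (time in days, magnitude); values are made up.
times = np.linspace(0.0, 60.0, 30)
mags = 19.5 - 2.0 * np.exp(-((times - 15.0) ** 2) / 80.0)

def as_numerical_table(t: np.ndarray, m: np.ndarray) -> str:
    """Serialize the measurements as a plain-text table for the prompt.
    Column names and precision are illustrative choices."""
    rows = [f"{ti:8.2f}  {mi:7.3f}" for ti, mi in zip(t, m)]
    return "time_days  mag\n" + "\n".join(rows)

def as_rendered_plot(t: np.ndarray, m: np.ndarray) -> str:
    """Render the same measurements as a PNG and return it base64-encoded,
    i.e. the image input a VLM would otherwise receive."""
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot(t, m, marker="o")
    ax.invert_yaxis()  # smaller magnitude = brighter
    ax.set_xlabel("time [days]")
    ax.set_ylabel("magnitude")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight")
    plt.close(fig)
    return base64.b64encode(buf.getvalue()).decode("ascii")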
