ArXiv TLDR

From Codebooks to VLMs: Evaluating Automated Visual Discourse Analysis for Climate Change on Social Media

arXiv: 2604.21786

Katharina Prasse, Steffen Jung, Isaac Bravo, Stefanie Walter, Patrick Knab + 2 more

cs.CV

TLDR

This paper evaluates various Vision-Language Models (VLMs) for automated visual discourse analysis of climate change images on social media.

Key contributions

  • Benchmarks six promptable VLMs and 15 zero-shot CLIP-like models on two climate change image datasets.
  • Finds Gemini-3.1-flash-lite outperforms all other models across both datasets, with only a small gap to moderately sized open-weight models.
  • Advocates for distributional evaluation, showing VLMs reliably recover population-level trends despite moderate per-image accuracy.
  • Identifies that chain-of-thought reasoning reduces performance, while annotation-dimension-specific prompt design improves it.
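The distributional-evaluation argument can be illustrated with a small, self-contained sketch (the numbers below are illustrative, not taken from the paper): when a classifier's errors are roughly symmetric across classes, the aggregated label distribution it predicts over a corpus can stay close to the true one even though per-image accuracy is only moderate.

```python
# Hypothetical class shares in the corpus (e.g. three image types).
true_dist = [0.5, 0.3, 0.2]

# Hypothetical row-stochastic confusion matrix: row i gives
# P(predicted class j | true class i); the diagonal is per-class accuracy.
confusion = [
    [0.70, 0.15, 0.15],
    [0.15, 0.70, 0.15],
    [0.15, 0.15, 0.70],
]

# Per-image accuracy: diagonal of the confusion matrix, weighted
# by how common each true class is.
accuracy = sum(p * confusion[i][i] for i, p in enumerate(true_dist))

# Predicted label distribution over the corpus: q_j = sum_i p_i * C[i][j].
pred_dist = [
    sum(true_dist[i] * confusion[i][j] for i in range(3))
    for j in range(3)
]

# Total variation distance between true and predicted distributions.
tv = 0.5 * sum(abs(p - q) for p, q in zip(true_dist, pred_dist))

print(f"per-image accuracy: {accuracy:.2f}")            # 0.70
print(f"predicted distribution: {[round(q, 3) for q in pred_dist]}")
print(f"total variation distance: {tv:.3f}")            # 0.075
```

Here 30% of individual images are mislabeled, yet the predicted class shares deviate from the truth by a total variation distance of only 0.075, which is why corpus-level trend analysis can tolerate moderate instance-level accuracy. In practice the confusion matrix could also be estimated on a validated subset (such as the paper's expert-annotated set) and used to debias the aggregate counts further.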

Why it matters

This research provides a robust framework for using computer vision to analyze climate change communication on social media. It enables researchers to efficiently study vast image datasets, identifying effective communication strategies at scale.

Original Abstract

Social media platforms have become primary arenas for climate communication, generating millions of images and posts that - if systematically analysed - can reveal which communication strategies mobilise public concern and which fall flat. We aim to facilitate such research by analysing how computer vision methods can be used for social media discourse analysis. This analysis includes application-based taxonomy design, model selection, prompt engineering, and validation. We benchmark six promptable vision-language models and 15 zero-shot CLIP-like models on two datasets from X (formerly Twitter) - a 1,038-image expert-annotated set and a larger corpus of over 1.2 million images, with 50,000 labels manually validated - spanning five annotation dimensions: animal content, climate change consequences, climate action, image setting, and image type. Among the models benchmarked, Gemini-3.1-flash-lite outperforms all others across all super-categories and both datasets, while the gap to open-weight models of moderate size remains relatively small. Beyond instance-level metrics, we advocate for distributional evaluation: VLM predictions can reliably recover population level trends even when per-image accuracy is moderate, making them a viable starting point for discourse analysis at scale. We find that chain-of-thought reasoning reduces rather than improves performance, and that annotation dimension specific prompt design improves performance. We release tweet IDs and labels along with our code at https://github.com/KathPra/Codebooks2VLMs.git.
