ArXiv TLDR

Urban-ImageNet: A Large-Scale Multi-Modal Dataset and Evaluation Framework for Urban Space Perception

arXiv:2605.09936

Yiwei Ou, Chung Ching Cheung, Jun Yang Ang, Xiaobin Ren, Ronggui Sun + 3 more

cs.CV · cs.IR · cs.LG

TLDR

Urban-ImageNet is a new multi-modal dataset of over 2 million social media images, paired with a benchmark for evaluating how AI models perceive urban spaces.

Key contributions

  • Contains over 2M social media images and paired text posts from 61 urban sites.
  • Organized by HUSIC, a 10-class hierarchical taxonomy grounded in urban theory.
  • Evaluates models on urban scene classification, cross-modal retrieval, and instance segmentation (a minimal retrieval sketch follows this list).
  • Designed to capture spatial, social, and functional distinctions crucial for urban studies.
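
For orientation, here is a minimal sketch of what a T2-style image-text retrieval scoring step could look like, using a generic open CLIP checkpoint via the transformers library. The model choice, image path, and captions are illustrative assumptions; the paper's actual evaluation protocol is not specified in this summary.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Generic public checkpoint; an assumption, not the model used in the paper.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("street_scene.jpg")  # placeholder image path
texts = ["a crowded public plaza", "an interior shopping mall"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-caption similarities;
# ranking captions by this score is the core of retrieval evaluation.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```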

Why it matters

This dataset provides a much-needed, theory-grounded benchmark for urban studies, moving beyond generic scene data. It enables AI systems to be evaluated on their ability to interpret complex urban environments across modalities and tasks, advancing research in machine perception of real-world urban dynamics.

Original Abstract

We present Urban-ImageNet, a large-scale multi-modal dataset and evaluation benchmark for urban space perception from user-generated social media imagery. The corpus contains over 2 million public social media images and paired textual posts collected from Weibo across 61 urban sites in 24 Chinese cities from 2019 to 2025, with controlled benchmark subsets at 1K, 10K, and 100K scale and a full 2M corpus for large-scale training and evaluation. Urban-ImageNet is organized by HUSIC, a Hierarchical Urban Space Image Classification framework that defines a 10-class taxonomy grounded in urban theory. The taxonomy is designed to distinguish activated and non-activated public spaces, exterior and interior urban environments, accommodation spaces, consumption content, portraits, and non-spatial social-media content. Rather than treating urban imagery as generic scene data, Urban-ImageNet evaluates whether machine perception models can capture spatial, social, and functional distinctions that are central to urban studies. The benchmark supports three tasks within one standardized library: (T1) urban scene semantic classification, (T2) cross-modal image-text retrieval, and (T3) instance segmentation. Our experiments evaluate representative vision, vision-language, and segmentation models, revealing strong performance on supervised scene classification but notably weaker results on cross-modal retrieval and instance-level urban object segmentation. A multi-scale study further examines how model performance changes as balanced training data increases from 1K to 10K to 100K images. Urban-ImageNet provides a unified, theory-grounded, multi-city benchmark for evaluating how AI systems perceive and interpret contemporary urban spaces across modalities, scales, and task formulations. The dataset and benchmark are available at: huggingface.co/datasets/Yiwei-Ou/Urban-ImageNet and github.com/yiasun/dataset-2.
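
The abstract points to a Hugging Face dataset repository. Below is a minimal loading sketch using the standard datasets library; the repo id is taken from the URL above, but the split name and column names are assumptions, so inspect the features to confirm the actual schema.

```python
from datasets import load_dataset

# Repo id from the URL in the abstract; "train" split is an assumption.
ds = load_dataset("Yiwei-Ou/Urban-ImageNet", split="train")

print(ds.features)  # discover the actual image / text / label columns
example = ds[0]     # one image-text record from the corpus
```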
