ArXiv TLDR

T-REN: Learning Text-Aligned Region Tokens Improves Dense Vision-Language Alignment and Scalability

arXiv: 2604.18573

Savya Khosla, Sethuraman T, Aryan Chadha, Alex Schwing, Derek Hoiem

cs.CV

TLDR

T-REN improves dense vision-language alignment and scalability by mapping visual data to a compact set of text-aligned region tokens, yielding large gains on dense cross-modal tasks while cutting visual token counts by orders of magnitude.

Key contributions

  • Proposes T-REN, an efficient encoder that maps visual data to compact, text-aligned region tokens.
  • Achieves stronger dense cross-modal understanding with only 3.7% additional parameters.
  • Reduces visual token counts by 24x for images and 187x for videos, improving scalability.
  • Delivers significant gains, e.g. +5.9 mIoU on ADE20K open-vocabulary segmentation and +15.6% recall on Ego4D video object localization.

Why it matters

This paper tackles two core limitations of vision-language encoders: weak alignment between language and dense visual features, and the high token counts needed for fine-grained representations. T-REN offers an efficient solution, delivering large gains across dense vision-language tasks with only a small parameter overhead. Its order-of-magnitude reduction in token counts makes such models far more practical for real-world use, especially on long videos.

Original Abstract

Despite recent progress, vision-language encoders struggle with two core limitations: (1) weak alignment between language and dense vision features, which hurts tasks like open-vocabulary semantic segmentation; and (2) high token counts for fine-grained visual representations, which limits scalability to long videos. This work addresses both limitations. We propose T-REN (Text-aligned Region Encoder Network), an efficient encoder that maps visual data to a compact set of text-aligned region-level representations (or region tokens). T-REN achieves this through a lightweight network added on top of a frozen vision backbone, trained to pool patch-level representations within each semantic region into region tokens and align them with region-level text annotations. With only 3.7% additional parameters compared to the vision-language backbone, this design yields substantially stronger dense cross-modal understanding while reducing the token count by orders of magnitude. Specifically, T-REN delivers +5.9 mIoU on ADE20K open-vocabulary segmentation, +18.4% recall on COCO object-level text-image retrieval, +15.6% recall on Ego4D video object localization, and +17.6% mIoU on VSPW video scene parsing, all while reducing token counts by more than 24x for images and 187x for videos compared to the patch-based vision-language backbone. The code and model are available at https://github.com/savya08/T-REN.
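To make the mechanism described in the abstract concrete, here is a minimal sketch of the region-token idea under stated assumptions: frozen patch-level features are average-pooled within each semantic region by a lightweight head and aligned with region-level text embeddings via a symmetric contrastive loss. The names (RegionTokenPooler, region_text_contrastive_loss), dimensions, MLP head, and loss choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionTokenPooler(nn.Module):
    """Hypothetical sketch: pools frozen patch features into region tokens
    using per-region masks, then projects them into the text embedding space."""

    def __init__(self, patch_dim=1024, text_dim=768):
        super().__init__()
        # Lightweight head on top of the frozen vision backbone
        # (assumption: a simple MLP projection; the paper's head may differ).
        self.proj = nn.Sequential(
            nn.Linear(patch_dim, patch_dim),
            nn.GELU(),
            nn.Linear(patch_dim, text_dim),
        )

    def forward(self, patch_feats, region_masks):
        # patch_feats:  (B, N, D)  frozen patch-level features
        # region_masks: (B, R, N)  binary masks assigning patches to regions
        masks = region_masks.float()
        denom = masks.sum(dim=-1, keepdim=True).clamp(min=1.0)
        # Masked average pooling: one compact token per semantic region.
        region_tokens = torch.bmm(masks, patch_feats) / denom      # (B, R, D)
        return F.normalize(self.proj(region_tokens), dim=-1)       # (B, R, text_dim)


def region_text_contrastive_loss(region_tokens, text_embeds, temperature=0.07):
    """Symmetric InfoNCE between region tokens and region-level text embeddings.
    Both inputs: (M, text_dim), L2-normalized, paired by index."""
    logits = region_tokens @ text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, N, R = 2, 196, 8                             # batch, patches, regions per image
    pooler = RegionTokenPooler()
    patch_feats = torch.randn(B, N, 1024)           # would come from a frozen ViT
    region_masks = torch.rand(B, R, N) > 0.5        # would come from region annotations
    tokens = pooler(patch_feats, region_masks)      # (2, 8, 768)
    text = F.normalize(torch.randn(B * R, 768), dim=-1)
    loss = region_text_contrastive_loss(tokens.reshape(-1, 768), text)
    print(tokens.shape, loss.item())
```

In the paper, the added head amounts to only about 3.7% extra parameters on top of the frozen backbone, and the resulting region tokens stand in for hundreds of patch tokens per image.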

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.