ESsEN: Training Compact Discriminative Vision-Language Transformers in a Low-Resource Setting
Clayton Fields, Casey Kennington
TLDR
ESsEN introduces a compact vision-language model trainable with few resources, matching larger models on discriminative tasks with only a fraction of the parameters.
Key contributions
- Two-tower encoders outperform one-tower encoders in low-resource settings on discriminative English tasks.
- Incorporating traditional convolutional networks into the two-tower transformer architecture helps produce parameter-efficient vision-language models.
- The cross-modal fusion module of a two-tower encoder can vary significantly in shape and size while producing the same results (a minimal sketch combining these ideas follows this list).
- ESsEN is a compact vision-language model, trainable end-to-end, that matches larger models on several tasks with a fraction of the parameters.
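To make the architectural ideas above concrete, here is a minimal PyTorch sketch of a two-tower vision-language encoder in this style: a small CNN vision tower, a compact transformer text tower, and a cross-attention fusion module that scores image-text pairs. All layer sizes, module names, and the fusion design are illustrative assumptions, not ESsEN's published configuration.

```python
# Illustrative two-tower vision-language encoder: CNN vision tower,
# transformer text tower, small cross-modal fusion head.
# Sizes and design choices are assumptions, not ESsEN's actual code.
import torch
import torch.nn as nn

class VisionTower(nn.Module):
    """Small convolutional image encoder producing a sequence of patch features."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, images):                    # (B, 3, H, W)
        feats = self.conv(images)                 # (B, dim, H/8, W/8)
        return feats.flatten(2).transpose(1, 2)   # (B, N, dim) token sequence

class TextTower(nn.Module):
    """Small transformer text encoder over token ids."""
    def __init__(self, vocab_size=30522, dim=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, token_ids):                 # (B, L)
        return self.encoder(self.embed(token_ids))  # (B, L, dim)

class FusionModule(nn.Module):
    """Cross-modal fusion: text tokens attend over image tokens, then score."""
    def __init__(self, dim=256):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, text_feats, image_feats):
        fused, _ = self.cross_attn(text_feats, image_feats, image_feats)
        return self.score(fused.mean(dim=1))      # (B, 1) image-text match logit

class TwoTowerVLModel(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.vision, self.text = VisionTower(dim), TextTower(dim=dim)
        self.fusion = FusionModule(dim)

    def forward(self, images, token_ids):
        return self.fusion(self.text(token_ids), self.vision(images))

model = TwoTowerVLModel()
logit = model(torch.randn(2, 3, 64, 64), torch.randint(0, 30522, (2, 12)))
print(logit.shape)  # torch.Size([2, 1])
```

Note how the two towers encode each modality independently and only the small fusion head mixes them; this separation is what makes the fusion module's exact shape and size easy to vary, consistent with the paper's finding that such variations produce the same results.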
Why it matters
This paper makes vision-language modeling more accessible by providing tools and methods for training compact models with limited resources. It addresses the need for smaller, more efficient models on edge devices and independent robotic platforms, and broadens who can participate in vision-language research.
Original Abstract
Vision-language modeling is rapidly increasing in popularity, with an ever-expanding list of available models. In most cases, these vision-language models have parameters in the tens of billions, a scale that serves some needs, but in many cases smaller models are required (e.g., on edge devices or independent robotic platforms). Unfortunately, there is little research on producing lightweight models or on training them with small datasets. Inspired by the language-learning progression and data sparsity in child development, in this paper we address both of these goals in a systematic fashion. We show that two-tower encoder models are superior to one-tower encoders in low-resource settings for discriminative English tasks. We also show that incorporating traditional convolutional networks into the two-tower transformer architecture can help produce parameter-efficient vision-language models. Finally, we show that the cross-modal fusion module of two-tower encoders can vary significantly in shape and size while producing the same results. In addition, we present ESsEN, a compact vision-language model that can be trained end-to-end with relatively few resources and performs as well as larger models on several tasks with only a fraction of the parameters. The experimental results and the tools we present here make vision-language modeling more accessible to a wider variety of researchers.