Distilling Vision Transformers for Distortion-Robust Representation Learning
Konstantinos Alexis, Giorgos Giannopoulos, Dimitrios Gunopulos
TLDR
A new method distills knowledge from pretrained Vision Transformers via asymmetric, multi-level transfer: the teacher sees clean images, the student sees distorted ones, and the student learns distortion-robust representations without ever accessing clean data.
Key contributions
- Leverages pretrained Vision Transformers for robust representation learning.
- Proposes asymmetric distillation: teacher sees clean, student sees distorted images.
- Introduces multi-level distillation aligning embeddings, patch features, and attention maps.
- Student learns clean-image representations effectively without direct access to clean data.
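The multi-level distillation in the contributions above could be sketched as a three-term loss aligning global embeddings, patch-level features, and attention maps between teacher and student. The specific loss terms below (cosine distance, MSE, KL divergence), their equal weighting, and the tensor shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def multilevel_distill_loss(t_cls, s_cls, t_patch, s_patch, t_attn, s_attn):
    """Hypothetical multi-level distillation loss. Teacher tensors (t_*) come
    from the clean image, student tensors (s_*) from its distorted version."""
    # Global level: cosine distance between teacher/student [CLS] embeddings.
    cos = np.sum(t_cls * s_cls, axis=-1) / (
        np.linalg.norm(t_cls, axis=-1) * np.linalg.norm(s_cls, axis=-1))
    l_global = float(np.mean(1.0 - cos))
    # Patch level: mean squared error over patch-token features.
    l_patch = float(np.mean((s_patch - t_patch) ** 2))
    # Attention level: KL divergence D_KL(teacher || student), averaged
    # over rows of the attention distributions.
    l_attn = float(np.mean(np.sum(t_attn * np.log(t_attn / s_attn), axis=-1)))
    # Equal weighting is an assumption; the paper may weight terms differently.
    return l_global + l_patch + l_attn
```

When teacher and student outputs coincide, every term is zero, so the loss vanishes; any mismatch at any of the three levels pushes the loss above zero, which is what drives the student toward the teacher's clean-image representations.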
Why it matters
This paper addresses the critical challenge of learning robust visual representations when clean data is scarce or unavailable. By distilling knowledge from pretrained models, it significantly improves performance on tasks with distorted inputs, making AI systems more reliable in real-world, imperfect conditions.
Original Abstract
Self-supervised learning has achieved remarkable success in learning visual representations from clean data, yet remains challenging when clean observations are sparse or not available at all. In this paper, we demonstrate that pretrained vision models can be leveraged to learn distortion-robust representations, which can then be effectively applied to downstream tasks operating on distorted observations. In particular, we propose an asymmetric knowledge distillation framework in which both teacher and student are initialized from the same pretrained Vision Transformer but receive different views of each image: the teacher processes clean images, while the student sees their distorted versions. We introduce multi-level distillation that aligns global embeddings, patch-level features, and attention maps and show that the student is able to approximate clean-image representations despite never directly accessing clean data. We evaluate our approach on image classification tasks across several datasets and under various distortions, consistently outperforming existing alternatives for the same amount of human supervision.