ViCrop-Det: Spatial Attention Entropy Guided Cropping for Training-Free Small-Object Detection
Hui Wang, Hongze Li, Wei Chen, Xiaojin Zhang
TLDR
ViCrop-Det improves small-object detection by adaptively cropping regions using spatial attention entropy, enhancing transformer performance without retraining.
Key contributions
- Introduces ViCrop-Det, a training-free inference framework for small-object detection.
- Leverages Spatial Attention Entropy (SAE) to adaptively crop and process high-uncertainty regions.
- Recovers fine-grained features and resolves spatial ambiguity without architectural modifications.
- Achieves +1-3 mAP@50 on VisDrone/DOTA-v1.5 with only 20-23% latency overhead.
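The SAE signal above is just the Shannon entropy of the decoder's cross-attention weights, computed per spatial tile. The paper does not publish reference code, so the sketch below is an illustrative reconstruction: the tiling scheme, grid size, and the use of a single averaged attention map are assumptions, not the authors' exact procedure.

```python
import numpy as np

def spatial_attention_entropy(attn, grid=4):
    """Per-tile Shannon entropy of a cross-attention map (a possible SAE proxy).

    attn: 2D array (H, W) of non-negative attention weights, e.g. averaged
          over decoder queries (an assumption; the paper's exact pooling may differ).
    Returns a (grid, grid) array; higher entropy flags tiles where attention
    is diffuse, i.e. regions of high spatial ambiguity worth re-cropping.
    """
    H, W = attn.shape
    th, tw = H // grid, W // grid
    scores = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            tile = attn[i * th:(i + 1) * th, j * tw:(j + 1) * tw].ravel()
            p = tile / (tile.sum() + 1e-12)          # normalize tile to a distribution
            scores[i, j] = -(p * np.log(p + 1e-12)).sum()  # Shannon entropy
    return scores
```

A tile with a single sharp attention peak scores low, while a tile with uniformly spread attention scores near the maximum log(tile_area), which matches the intuition that diffuse attention signals unresolved small targets.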
Why it matters
Transformers often struggle with small objects because a uniform global receptive field degrades local features in information-dense regions. ViCrop-Det addresses this by adaptively focusing computational resources on critical regions using attention entropy, offering a practical, efficient way to boost small-object detection in existing models without costly retraining.
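The routing step described in the abstract picks regions that are both salient and uncertain, under a fixed crop budget. A minimal sketch of that selection rule follows; the min-max normalization, the additive score, and the `budget` parameter are illustrative assumptions, not the paper's exact routing criterion.

```python
import numpy as np

def select_crops(entropy, saliency, budget=2):
    """Toy stand-in for ViCrop-Det's dynamic spatial routing.

    entropy, saliency: (grid, grid) per-tile scores (e.g. SAE and a target
    saliency estimate). Ranks tiles by the sum of min-max-normalized scores
    and returns the top-`budget` tile indices for high-resolution re-cropping.
    """
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    score = norm(entropy) + norm(saliency)   # favor high uncertainty AND high saliency
    top = np.argsort(score, axis=None)[::-1][:budget]
    return [tuple(np.unravel_index(i, score.shape)) for i in top]
```

In the full method, each selected tile would be cropped, re-encoded at higher resolution, and its detections merged back with the global pass, which is how the fixed compute budget stays bounded.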
Original Abstract
Transformer-based architectures have established a dominant paradigm in global semantic perception; however, they remain fundamentally constrained by the profound spatial heterogeneity inherent in natural images. Specifically, the imposition of a uniform global receptive field across regions of varying information density inevitably leads to local feature degradation, particularly in dense conflict zones populated by microscopic targets. To address this mechanistic limitation, we propose ViCrop-Det, a training-free inference framework that introduces adaptive spatial trust region shrinkage. Inspired by the use of attention entropy in anomaly segmentation, ViCrop-Det leverages the detection decoder's cross-attention distribution as an endogenous probe. By utilizing Spatial Attention Entropy (SAE) to heuristically evaluate local spatial ambiguity, the framework executes dynamic spatial routing, allocating a fixed computational budget exclusively to regions exhibiting both high target saliency and high cognitive uncertainty. By shrinking the spatial trust region and injecting high-frequency localized observations, ViCrop-Det actively resolves spatial ambiguity and recovers fine-grained features without requiring architectural modifications. Extensive evaluations on VisDrone and DOTA-v1.5 demonstrate that ViCrop-Det yields competitive performance enhancements, consistently adding +1-3 mAP@50 to RT-DETR-R50 and Deformable DETR with a marginal 20-23% latency overhead. On MS COCO, AP_S improves while AP_M/AP_L remains stable, indicating precise fine-scale refinement without compromising the global spatial prior. Under compute-matched settings, our adaptive routing strategy comprehensively surpasses uniform slicing baselines, achieving a highly optimized accuracy-speed trade-off.