Low-Data Supervised Adaptation Outperforms Prompting for Cloud Segmentation Under Domain Shift
Harshith Kethavath, Weiming Hu
TLDR
Low-data supervised fine-tuning significantly outperforms complex prompting for adapting vision-language models to cloud segmentation in remote sensing.
Key contributions
- Prompting, even across 60 engineered variants, consistently underperforms the zero-shot baseline (0.255 mIoU) for cloud segmentation on satellite data.
- Supervised fine-tuning with just 0.1% labeled data (~8 images) surpasses zero-shot performance.
- Using 5-10% labeled data recovers ~85% of the maximum achievable mIoU for the task.
- Full fine-tuning consistently outperforms low-rank adaptation, especially for spectrally ambiguous classes.
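The results above are all reported in mean intersection-over-union (mIoU), the standard segmentation metric: per-class IoU averaged over the classes present. A minimal NumPy sketch of the metric as commonly defined (the `miou` helper is ours for illustration, not code from the paper):

```python
import numpy as np

def miou(pred, target, num_classes):
    """Mean intersection-over-union for integer class maps of the same shape.

    Classes absent from both prediction and target are skipped,
    so they neither reward nor penalize the score.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes (e.g. clear vs. cloud):
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
# class 0: inter=1, union=2 -> 0.5; class 1: inter=2, union=3 -> 0.667
print(miou(pred, target, 2))  # -> 0.5833...
```

Because mIoU averages over classes, strong performance on easy classes can mask the temporary degradation the paper observes on spectrally ambiguous ones.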
Why it matters
This paper challenges the prevailing assumption that domain-specific prompting is an effective way to adapt vision-language models to specialized imagery like satellite data. It demonstrates that supervised fine-tuning on even minimal labeled data outperforms extensive prompt engineering, giving practitioners clear guidance: labeled data, not prompting, is the more effective path for VLM adaptation.
Original Abstract
Adapting vision-language models to remote sensing imagery presents a fundamental challenge: both the visual and linguistic distributions of satellite data lie far outside natural image pretraining corpora. Despite this, prompting remains the dominant deployment paradigm, driven by the assumption that domain-specific language can guide frozen model representations toward specialized tasks. We test this assumption directly on a domain where the mismatch is prominent: cloud segmentation for satellite imagery. Using CLIPSeg on the CloudSEN12+ cloud segmentation benchmark, we evaluate 60 prompt variants spanning simple labels, domain terminology, appearance descriptors, and contextual cues, finding that every variant underperforms the zero-shot baseline (0.255 mIoU), with engineered prompts scoring as low as 0.07 mIoU. No amount of linguistic refinement bridges the gap between CLIP's natural image representations and satellite spectral imagery. In contrast, supervised fine-tuning with just 0.1% labeled data (~8 images) surpasses zero-shot performance overall, and 5-10% data recovers ~85% of maximum achievable mIoU. Full fine-tuning consistently outperforms low-rank adaptation by 0.03-0.09 mIoU, with the largest gaps for spectrally ambiguous classes, and at 0.5 to 1% labeled data, fine-tuning temporarily degrades performance on these classes before recovering, a supervision dip that aggregate mIoU can mask. For practitioners adapting vision-language models to specialized imagery, our results deliver a clear message: labeled data is not the expensive alternative to prompting; it is the worthwhile path.