Self-supervised Pretraining of Cell Segmentation Models
Kaden Stillwagon, Alexandra Dunnum VandeLoo, Benjamin Magondu, Craig R. Forest
TLDR
DINOCell improves cell segmentation by using self-supervised pretraining on unlabeled microscopy images, outperforming SAM-based models.
Key contributions
- Proposes DINOCell, a self-supervised framework for cell instance segmentation.
- Adapts DINOv2 representations to microscopy via continued self-supervised training.
- Achieves a SEG score of 0.784 on LIVECell, 10.42% higher than leading SAM-based models.
- Demonstrates strong zero-shot performance on out-of-distribution microscopy datasets.
Why it matters
Accurate cell segmentation is crucial for biological analysis but hindered by limited labeled data. DINOCell addresses the resulting domain shift by adapting self-supervised models to microscopy data before supervised fine-tuning, yielding significantly improved performance and robustness for cell image analysis.
Original Abstract
Instance segmentation enables the analysis of spatial and temporal properties of cells in microscopy images by identifying the pixels belonging to each cell. However, progress is constrained by the scarcity of high-quality labeled microscopy datasets. Many recent approaches address this challenge by initializing models with segmentation-pretrained weights from large-scale natural-image models such as Segment Anything Model (SAM). However, representations learned from natural images often encode objectness and texture priors that are poorly aligned with microscopy data, leading to degraded performance under domain shift. We propose DINOCell, a self-supervised framework for cell instance segmentation that leverages representations from DINOv2 and adapts them to microscopy through continued self-supervised training on unlabeled cell images prior to supervised fine-tuning. On the LIVECell benchmark, DINOCell achieves a SEG score of 0.784, improving by 10.42% over leading SAM-based models, and demonstrates strong zero-shot performance on three out-of-distribution microscopy datasets. These results highlight the benefits of domain-adapted self-supervised pretraining for robust cell segmentation.
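The continued self-supervised pretraining described above follows the DINO/DINOv2 recipe, whose core mechanism is self-distillation: a student network is trained on unlabeled images while a teacher network tracks the student via an exponential moving average (EMA) of its weights. The sketch below illustrates only that update rule with toy scalar "weights"; all names are illustrative and the actual DINOCell training code is not shown in the abstract.

```python
def ema_update(teacher, student, momentum=0.996):
    """EMA of student weights into the teacher, the core update in
    DINO-style self-distillation (weights modeled as plain floats)."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]

def continued_pretraining(student, steps, lr=0.1):
    """Toy stand-in for the unlabeled-pretraining stage: the student
    takes a dummy 'gradient step' each iteration (here, weight decay
    toward zero), and the teacher lags behind it via EMA. In DINOCell
    this stage would run on unlabeled microscopy images before the
    supervised fine-tuning stage."""
    teacher = list(student)
    for _ in range(steps):
        student = [w - lr * w for w in student]  # placeholder update
        teacher = ema_update(teacher, student)
    return teacher, student
```

With a high momentum the teacher changes slowly, which is what makes it a stable distillation target; after fine-tuning, only the adapted backbone (teacher or student, per the recipe) is kept for segmentation.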