SIAM: Head and Brain MRI Segmentation from Few High-Quality Templates via Synthetic Training
Romain Valabregue, Ines Khemir, Eric Badinet, François Rousseau, Guillaume Auzias, et al.
TLDR
SIAM is a 3D whole-head MRI segmentation model trained on only six manually annotated templates; it matches or outperforms state-of-the-art methods on brain structures while also segmenting non-brain structures.
Key contributions
- Segments 16 brain and extra-cerebral structures using only six high-quality, manually annotated templates.
- Extends domain randomization to intensity and shape for robust contrast and anatomical variability.
- Matches or outperforms state-of-the-art methods for brain structures across diverse datasets and contrasts.
- Enables fully automated, preprocessing-free analysis by segmenting both brain and non-brain tissues.
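The domain-randomization idea behind synthetic training can be illustrated with a minimal sketch: given a label map, each structure is assigned a randomly sampled intensity distribution, so the network sees a different "contrast" at every iteration and learns to be contrast-agnostic. The sketch below assumes only NumPy; `synthesize_image` is a hypothetical helper for illustration, not SIAM's actual generator (which also applies high-resolution spatial transformations for shape variability).

```python
import numpy as np

def synthesize_image(label_map, rng=None):
    """Illustrative intensity randomization (SynthSeg-style sketch):
    draw a random mean/std per label, sample voxel intensities,
    then rescale to [0, 1]. Hypothetical, not SIAM's actual code."""
    rng = rng or np.random.default_rng()
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mean = rng.uniform(0.0, 255.0)   # random contrast per structure
        std = rng.uniform(1.0, 25.0)     # random within-structure noise
        mask = label_map == lab
        image[mask] = rng.normal(mean, std, size=int(mask.sum()))
    image -= image.min()
    if image.max() > 0:
        image /= image.max()
    return image

# Toy 3D label map with background plus two nested structures
labels = np.zeros((8, 8, 8), dtype=np.int32)
labels[2:6, 2:6, 2:6] = 1
labels[3:5, 3:5, 3:5] = 2
img = synthesize_image(labels, rng=np.random.default_rng(0))
```

Re-running the generator with a different seed yields the same anatomy under a different synthetic contrast, which is what lets a single set of templates stand in for many acquisition protocols.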
Why it matters
This paper introduces SIAM, a novel approach that significantly reduces the need for extensive labeled data in brain MRI segmentation. It overcomes biases of prior methods by using only a few high-quality templates and expands segmentation to the entire head. This advancement streamlines medical image analysis, making it more accessible and robust across diverse imaging modalities.
Original Abstract
Synthetic training has recently advanced brain MRI segmentation by enabling contrast-agnostic models trained entirely on generated data. However, most existing approaches rely on hundreds of automatically labeled templates, introducing systematic biases and limiting their flexibility to incorporate new anatomical structures. We present the Segment It All Model (SIAM), a 3D whole-head segmentation framework for 16 anatomical structures, trained using only six high-quality, manually annotated templates. SIAM extends domain randomization to both intensity and shape domains: synthetic image generation ensures contrast variability, while high-resolution spatial transformations model anatomical differences in cortical thickness and deep nuclei morphology. Unlike prior synthetic models, SIAM simultaneously segments brain as well as extra-cerebral tissues, including cerebrospinal fluid, vessels, dura mater, skull, and skin, enabling fully automated, preprocessing-free analysis. Evaluation across eight heterogeneous datasets (N=301) that include multiple contrasts (T1-weighted, T2-weighted, CT) and span a wide range of ages demonstrates that SIAM matches or outperforms state-of-the-art methods for brain structures, in addition to extending automated segmentation to non-brain structures. The model also exhibits superior consistency across contrasts and repeated acquisitions, together with improved sensitivity to subtle gray matter atrophy. We openly release the model and the label templates at https://github.com/romainVala/SIAM.