Brain-DiT: A Universal Multi-state fMRI Foundation Model with Metadata-Conditioned Pretraining
Junfeng Xia, Wenhao Ye, Xuanye Pan, Xinke Shen, Mo Wang + 1 more
TLDR
Brain-DiT is a universal fMRI foundation model that uses metadata-conditioned diffusion pretraining across diverse brain states to learn generalized representations.
Key contributions
- Introduces Brain-DiT, a universal fMRI foundation model pretrained on 349K sessions from 24 diverse datasets.
- Employs metadata-conditioned diffusion pretraining with a Diffusion Transformer (DiT).
- Learns multi-scale fMRI representations, capturing both fine-grained and global semantics.
- Demonstrates that diffusion-based generative pretraining outperforms reconstruction-based pretraining on fMRI downstream tasks.
Why it matters
Brain-DiT addresses a key limitation of prior fMRI models, which are typically pretrained on a narrow range of brain states, by learning generalized representations across diverse states. Its diffusion-based pretraining captures multi-scale fMRI dynamics, improving performance on a variety of downstream tasks. This could advance our understanding of brain function and disease.
Original Abstract
Current fMRI foundation models primarily rely on a limited range of brain states and mismatched pretraining tasks, restricting their ability to learn generalized representations across diverse brain states. We present Brain-DiT, a universal multi-state fMRI foundation model pretrained on 349,898 sessions from 24 datasets spanning resting, task, naturalistic, disease, and sleep states. Unlike prior fMRI foundation models that rely on masked reconstruction in the raw-signal space or a latent space, Brain-DiT adopts metadata-conditioned diffusion pretraining with a Diffusion Transformer (DiT), enabling the model to learn multi-scale representations that capture both fine-grained functional structure and global semantics. Across extensive evaluations and ablations on 7 downstream tasks, we find consistent evidence that diffusion-based generative pretraining is a stronger proxy than reconstruction or alignment, with metadata-conditioned pretraining further improving downstream performance by disentangling intrinsic neural dynamics from population-level variability. We also observe that downstream tasks exhibit distinct preferences for representational scale: ADNI classification benefits more from global semantic representations, whereas age/sex prediction comparatively relies more on fine-grained local structure. Code and parameters of Brain-DiT are available at https://github.com/REDMAO4869/Brain-DiT.
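The metadata-conditioned diffusion objective described in the abstract can be sketched as a standard noise-prediction (DDPM-style) loss in which the denoiser also receives a metadata embedding. The sketch below is a minimal illustration with assumed shapes, a linear noise schedule, and a toy linear stand-in for the DiT denoiser; none of these details come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 4 sessions, 64-dim fMRI latents, 8-dim metadata embedding.
B, D, M = 4, 64, 8

# Linear beta schedule (a common DDPM default; the paper's exact schedule
# is not specified here).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: noise clean latents x0 to timestep t."""
    a = alphas_bar[t][:, None]                      # (B, 1)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

# Toy stand-in for the DiT denoiser: one linear map over the
# concatenation of noisy latents, metadata, and normalized timestep.
W = rng.normal(scale=0.01, size=(D + M + 1, D))

def denoiser(x_t, meta, t):
    """Predicts the added noise, conditioned on session metadata."""
    inp = np.concatenate([x_t, meta, (t / T)[:, None]], axis=1)
    return inp @ W

x0 = rng.normal(size=(B, D))    # clean fMRI latents
meta = rng.normal(size=(B, M))  # metadata embedding (e.g. site, brain state)
t = rng.integers(0, T, size=B)  # random diffusion timesteps
eps = rng.normal(size=(B, D))   # Gaussian noise

x_t = q_sample(x0, t, eps)
eps_hat = denoiser(x_t, meta, t)
loss = np.mean((eps_hat - eps) ** 2)  # noise-prediction training loss
```

In pretraining, minimizing this loss forces the model to infer structure in the noisy signal; conditioning on metadata lets the network attribute population-level variability (site, demographics, acquisition state) to the conditioning input rather than to the learned dynamics, which is the disentanglement effect the abstract reports.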