
Evidence from general computer vision suggests that large-scale self-supervised pre-training presents vast yet underutilised possibilities in brain MRI analysis. Nonetheless, the current method of choice for deep-learning-based analysis of brain MRI remains the fully-supervised paradigm. Owing to highly effective data augmentation pipelines, the fully-supervised approach can perform well even with limited labeled data. However, out-of-domain robustness remains challenging, effectively barring such models from broad clinical deployment. The recent paradigm of training foundation models via self-supervised pre-training on large-scale datasets offers an avenue to remedy this, promising models that can be few-shot adapted to novel tasks while remaining robust to out-of-domain data. To spearhead the development of large self-supervised foundation models in the brain MRI domain, we propose FOMO, the first MICCAI challenge investigating foundation models for brain MRI. The challenge is designed to drastically lower the barrier for the MICCAI community to train foundation models. Further, it investigates the few-shot generalisation properties of foundation models on real-world brain MRI data by evaluating models on three large clinical, multi-vendor, multi-center datasets. Since models are evaluated on multiple downstream tasks, the challenge also examines the effects of different pre-training paradigms and configurations on downstream performance, ultimately identifying the most promising methodologies and quantifying the benefits of self-supervised pre-training. Participants will have access to the largest brain imaging dataset ever released, assembled from public sources and comprising 51,779 MRI scans (from 11,161 cases), of which approximately a third are of clinical quality.
The pre-training dataset contains no segmentation maps or disease-diagnosis information that could be used for supervision. Participants will first pre-train a model on this dataset before fine-tuning it on three few-shot supervised tasks on clinical MRIs: image-level infarct detection, meningioma segmentation, and brain age estimation. Evaluation uses large, diverse, multi-vendor, multi-center datasets of 1200, 600, and 2000 MRI scans (400, 200, and 1000 subjects), respectively. 20% of the data will be made available during a pre-evaluation phase, allowing participants to gauge the performance of their models before final submission. The challenge is jointly organised by researchers from the Pioneer Centre for AI, University of Copenhagen, Massachusetts General Hospital & Harvard Medical School, Bispebjerg & Frederiksberg Hospital, Copenhagen, the Copenhagen Research Centre for Biological and Precision Psychiatry & Gentofte Hospital, and the Copenhagen-based startup Cerebriu.
brain MRI, out-of-domain generalisation, self-supervised learning, few-shot learning, clinical data, foundation models, MICCAI 2025 challenge
