International challenges have become the standard for validation of biomedical image analysis methods. We argue, though, that the actual performance of even the winning algorithms on "real-world" clinical data often remains unclear, as the data included in these challenges are usually acquired in very controlled settings at few institutions. The seemingly obvious solution of simply collecting ever more data from more institutions does not scale well, due to privacy and ownership hurdles. Building upon the Federated Tumor Segmentation (FeTS) 2021 challenge, the first challenge ever proposed on federated learning, FeTS 2022 intends to address these hurdles, both for the creation and for the evaluation of tumor segmentation models. Specifically, the FeTS 2022 challenge uses clinically acquired, multi-institutional multi-parametric magnetic resonance imaging (mpMRI) scans from the RSNA-ASNR-MICCAI BraTS 2021 challenge, as well as from various remote independent institutions included in the collaborative network of a real-world federation (www.fets.ai). The FeTS 2022 challenge focuses on the construction and evaluation of a consensus model for the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas (and particularly the radiographically appearing glioblastomas). Compared to the BraTS challenge [1-4], the ultimate goals of FeTS are 1) the creation of a consensus segmentation model that has gained knowledge from data of multiple institutions without pooling their data together (i.e., by retaining the data within each institution), and 2) the evaluation of segmentation models in such a federated configuration (i.e., in the wild).
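To make the consensus-building idea concrete, below is a minimal sketch of weighted federated averaging (FedAvg-style aggregation), where each institution contributes only its locally trained model parameters, never its raw scans. This is an illustrative example, not the challenge's prescribed aggregation method; the function name and the toy weights are hypothetical, and numpy is assumed.

```python
import numpy as np

def federated_average(institution_weights, institution_sizes):
    """Aggregate per-institution model parameters into a consensus model
    by a weighted mean, with weights proportional to local dataset size.
    institution_weights: list of per-institution parameter lists
    institution_sizes:   number of local training cases per institution
    """
    total = sum(institution_sizes)
    n_params = len(institution_weights[0])
    # Weight each institution's parameters by its share of the total cases;
    # raw data never leaves the institution, only parameters are exchanged.
    return [
        sum(w[i] * (n / total)
            for w, n in zip(institution_weights, institution_sizes))
        for i in range(n_params)
    ]

# Toy example: two institutions, one parameter tensor each (hypothetical values).
w_a = [np.array([1.0, 2.0])]   # institution A's locally trained weights
w_b = [np.array([3.0, 4.0])]   # institution B's locally trained weights
consensus = federated_average([w_a, w_b], institution_sizes=[10, 30])
# consensus[0] is 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

The size-proportional weighting is one common design choice; Task 1 of the challenge is precisely about exploring better aggregation strategies, including robustness to institutions dropping out of a round.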
The FeTS 2022 challenge is structured in two tasks. Task 1 ("Federated Training") targets effective weight aggregation methods for the creation of a consensus model, given a pre-defined segmentation algorithm for training, while also (optionally) accounting for network outages. Task 2 ("Federated Evaluation") targets robust segmentation algorithms, evaluated on unseen datasets with realistic distribution shifts from various remote and independent institutions of the collaborative network of the fets.ai federation. To prepare for both tasks, participants can use the information on data origin provided during the training phase of the challenge. The clinical relevance and importance of the FeTS challenge lie in its addressing privacy, legal, bureaucratic, and ownership concerns, as well as robustness to realistic dataset shifts. Ground-truth reference annotations are created and approved by expert neuroradiologists for every subject included in the training, validation, and testing datasets, allowing quantitative evaluation of the participating algorithms. Participants are free to focus on one or multiple tasks. In favor of openness, in this year's challenge we have partnered with MLCommons, a non-profit organization whose members span multiple academic and industrial entities.

References
[1] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging, 34(10):1993-2024, 2015. DOI: 10.1109/TMI.2014.2377694
[2] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features", Nature Scientific Data, 4:170117, 2017. DOI: 10.1038/sdata.2017.117
[3] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint arXiv:1811.02629, 2018.
[4] U. Baid, et al., "The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification", arXiv preprint arXiv:2107.02314, 2021.
[5] R. Cox, J. Ashburner, H. Breman, K. Fissell, C. Haselgrove, C. Holmes, et al., "A (Sort of) new image data format standard: NIfTI-1: WE 150", NeuroImage, 22, 2004.
[6] T. Rohlfing, et al., "The SRI24 multichannel atlas of normal adult human brain structure", Human Brain Mapping, 31(5):798-819, 2010.
[7] S. Thakur, J. Doshi, S. Pati, S. Rathore, C. Sako, M. Bilello, et al., "Brain Extraction on MRI Scans in Presence of Diffuse Glioma: Multi-institutional Performance Evaluation of Deep Learning Methods and Robust Modality-Agnostic Training", NeuroImage, 220:117081, 2020. DOI: 10.1016/j.neuroimage.2020.117081
[8] R. Duan, J. Tong, L. Lin, L. D. Levine, M. D. Sammel, J. Stoddard, et al., "PALM: Patient-centered Treatment Ranking via Large-scale Multivariate Network Meta-analysis", medRxiv, 2020.
[9] M. Wiesenfarth, A. Reinke, B. A. Landman, M. Eisenmann, L. Aguilera Saiz, M. J. Cardoso, et al., "Methods and open-source toolkit for analyzing and visualizing challenge results", Scientific Reports, 11(1):1-15, 2021.
[10] L. Maier-Hein, et al., "Why rankings of biomedical image analysis competitions should be interpreted with care", Nature Communications, 9(1):1-13, 2018. DOI: 10.1038/s41467-018-07619-7
Keywords: Segmentation, Collaborative Learning, Brain Tumors, Challenge, Federated Learning, MICCAI, Cancer