Fetal brain tissue annotation and segmentation challenge results

Kelly Payette*, Hongwei Bran Li, Priscille de Dumast, Roxane Licandro, Hui Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao Liu, Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming Dong, Hao Fu, Guotai Wang, Zun Hyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L. Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida, Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum, Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, Kuanlun Liao, Yixuan Wu, Jintai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab

*Corresponding author for this work

Research output: Contribution to journal › Short survey › peer-review

12 Scopus citations


In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions, and consisted of an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
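Segmentation challenges of this kind are commonly scored with per-class overlap metrics such as the Dice coefficient. The abstract does not name the evaluation metrics, so the sketch below is an assumption, and the numeric label-to-tissue mapping is purely illustrative. It computes a per-class Dice score for a seven-tissue labeling like the one described above:

```python
import numpy as np

# Illustrative label numbering for the seven FeTA tissue classes
# (assumption: the actual dataset's label encoding may differ).
FETA_LABELS = {
    1: "external cerebrospinal fluid",
    2: "gray matter",
    3: "white matter",
    4: "ventricles",
    5: "cerebellum",
    6: "brainstem",
    7: "deep gray matter",
}

def dice_per_class(pred, gt, labels=tuple(FETA_LABELS)):
    """Per-class Dice overlap between two integer label volumes.

    pred, gt: numpy arrays of identical shape holding class labels.
    Returns a dict mapping each label to its Dice score in [0, 1],
    or NaN if the label is absent from both volumes.
    """
    scores = {}
    for label in labels:
        p = pred == label
        g = gt == label
        denom = p.sum() + g.sum()
        scores[label] = (
            2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
        )
    return scores
```

A usage example: for a prediction `[1, 1, 2, 0]` against ground truth `[1, 2, 2, 0]`, both label 1 and label 2 have one overlapping voxel out of three labeled voxels total, giving a Dice score of 2/3 for each.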

Original language: English
Article number: 102833
Journal: Medical Image Analysis
State: Published - Aug 2023


Funders and funder numbers:
Anna Müller Grocholski Foundation
EPSRC: NS/A000027/1, NS/A000049/1, NS/A000050/1
EU H2020 Marie Skłodowska-Curie: 765148
Max Cloetta Foundation
Medtronic: K-74851-01-01, RCSRF1819\7\34
Wellcome Trust: 203148/Z/16/Z, WT101957
Foundation for Research in Science and the Humanities
Agence Nationale de la Recherche
École Polytechnique Fédérale de Lausanne
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung: 205321-182602, FK-21-125
Vienna Science and Technology Fund: LS20-065
Austrian Science Fund: P 35189, I3925-B27
Hasler Stiftung
Hôpitaux Universitaires de Genève
Université de Genève
Université de Lausanne
Centre Hospitalier Universitaire Vaudois
Universität Zürich
Horizon 2020
EMDO Stiftung


    • Congenital disorders
    • Fetal brain MRI
    • Multi-class image segmentation
    • Super-resolution reconstructions
