Fetal brain tissue annotation and segmentation challenge results

Kelly Payette*, Hongwei Bran Li, Priscille de Dumast, Roxane Licandro, Hui Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao Liu, Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming Dong, Hao Fu, Guotai Wang, Zun Hyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L. Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida, Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum, Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, Kuanlun Liao, Yixuan Wu, Jintai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab

*Corresponding author for this work

Research output: Contribution to journal › Short survey › peer-review

21 Scopus citations

Abstract

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability present in the network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions and consisted of an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
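The abstract notes that four of the top five teams used ensemble learning. A minimal sketch of one common ensembling strategy for segmentation label maps, per-voxel majority voting, might look like the following (pure Python; the function name and toy data are illustrative and not taken from any challenge submission):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse several segmentation label maps (flattened sequences of
    integer tissue labels, one sequence per model) by per-voxel
    majority vote; ties resolve to the smallest label."""
    fused = []
    for voxel_labels in zip(*predictions):
        counts = Counter(voxel_labels)
        top = max(counts.values())
        fused.append(min(lbl for lbl, c in counts.items() if c == top))
    return fused

# Three hypothetical models' labels for five voxels
# (0 = background, 1-7 = the seven FeTA tissue classes):
model_a = [0, 1, 2, 3, 7]
model_b = [0, 1, 2, 4, 7]
model_c = [0, 5, 2, 4, 6]
print(majority_vote([model_a, model_b, model_c]))  # → [0, 1, 2, 4, 7]
```

In practice, teams typically averaged the models' per-class probability maps before taking the argmax rather than voting on hard labels, but the fusion idea is the same.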

Original language: English
Article number: 102833
Journal: Medical Image Analysis
Volume: 88
State: Published - Aug 2023

Funding

Funders and funder numbers:

Agence Nationale de la Recherche
Hasler Stiftung
Université de Lausanne
Université de Genève
Centre Hospitalier Universitaire Vaudois
EMDO Stiftung
Hôpitaux Universitaires de Genève
Foundation for Research in Science and the Humanities
Universität Zürich
OPO-Stiftung
École Polytechnique Fédérale de Lausanne
EU H2020 Marie Sklodowska-Curie
Max Cloetta Foundation
NVIDIA
Anna Müller Grocholski Foundation
Horizon 2020
Wellcome Trust: 203148/Z/16/Z, WT101957, 203148
Austrian Science Fund: P 35189, I3925-B27, I 3925
Vienna Science and Technology Fund: LS20-065
Horizon 2020 Framework Programme: 765148
EPSRC: NS/A000049/1, NS/A000050/1
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung: 205321-182602, FK-21-125
Engineering and Physical Sciences Research Council: NS/A000027/1
Medtronic: RCSRF1819\7\34, K-74851-01-01

Keywords

• Congenital disorders
• Fetal brain MRI
• Multi-class image segmentation
• Super-resolution reconstructions

