Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30908
Full metadata record
DC Field | Value | Language
dc.contributor.author | Payette, K | -
dc.contributor.author | Steger, C | -
dc.contributor.author | Licandro, R | -
dc.contributor.author | De Dumast, P | -
dc.contributor.author | Li, HB | -
dc.contributor.author | Barkovich, M | -
dc.contributor.author | Li, L | -
dc.contributor.author | Dannecker, M | -
dc.contributor.author | Chen, C | -
dc.contributor.author | Ouyang, C | -
dc.contributor.author | Mcconnell, N | -
dc.contributor.author | Miron, A | -
dc.contributor.author | Li, Y | -
dc.contributor.author | Uus, A | -
dc.contributor.author | Grigorescu, I | -
dc.contributor.author | Ramirez Gilliland, P | -
dc.contributor.author | Siddiquee, MMR | -
dc.contributor.author | Xu, D | -
dc.contributor.author | Myronenko, A | -
dc.contributor.author | Wang, H | -
dc.contributor.author | Huang, Z | -
dc.contributor.author | Ye, J | -
dc.contributor.author | Alenya, M | -
dc.contributor.author | Comte, V | -
dc.contributor.author | Camara, O | -
dc.contributor.author | Masson, J-B | -
dc.contributor.author | Nilsson, A | -
dc.contributor.author | Godard, C | -
dc.contributor.author | Mazher, M | -
dc.contributor.author | Qayyum, A | -
dc.contributor.author | Gao, Y | -
dc.contributor.author | Zhou, H | -
dc.contributor.author | Gao, S | -
dc.contributor.author | Fu, J | -
dc.contributor.author | Dong, G | -
dc.contributor.author | Wang, G | -
dc.contributor.author | Rieu, Z | -
dc.contributor.author | Yang, H | -
dc.contributor.author | Lee, M | -
dc.contributor.author | Plotka, S | -
dc.contributor.author | Grzeszczyk, MK | -
dc.contributor.author | Sitek, A | -
dc.contributor.author | Vargas Daza, L | -
dc.contributor.author | Usma, S | -
dc.contributor.author | Arbelaez, P | -
dc.contributor.author | Lu, W | -
dc.contributor.author | Zhang, W | -
dc.contributor.author | Liang, J | -
dc.contributor.author | Valabregue, R | -
dc.contributor.author | Joshi, AA | -
dc.contributor.author | Nayak, KN | -
dc.contributor.author | Leahy, RM | -
dc.contributor.author | Wilhelmi, L | -
dc.contributor.author | Dandliker, A | -
dc.contributor.author | Ji, H | -
dc.contributor.author | Gennari, AG | -
dc.contributor.author | Jakovcic, A | -
dc.contributor.author | Klaic, M | -
dc.contributor.author | Adzic, A | -
dc.contributor.author | Markovic, P | -
dc.contributor.author | Grabaric, G | -
dc.contributor.author | Kasprian, G | -
dc.contributor.author | Dovjak, G | -
dc.contributor.author | Rados, M | -
dc.contributor.author | Vasung, L | -
dc.contributor.author | Bach Cuadra, M | -
dc.contributor.author | Jakab, A | -
dc.date.accessioned | 2025-03-14T18:33:17Z | -
dc.date.available | 2025-03-14T18:33:17Z | -
dc.date.issued | 2024-10-30 | -
dc.identifier | ORCiD: Kelly Payette https://orcid.org/0000-0001-7041-0150 | -
dc.identifier | ORCiD: Roxane Licandro https://orcid.org/0000-0001-9066-4473 | -
dc.identifier | ORCiD: Hongwei Bran Li https://orcid.org/0000-0002-5328-6407 | -
dc.identifier | ORCiD: Liu Li https://orcid.org/0000-0003-2376-8162 | -
dc.identifier | ORCiD: Maik Dannecker https://orcid.org/0000-0001-9012-9606 | -
dc.identifier | ORCiD: Chen Chen https://orcid.org/0000-0002-3525-9755 | -
dc.identifier | ORCiD: Cheng Ouyang https://orcid.org/0000-0002-3069-8708 | -
dc.identifier | ORCiD: Alina Miron https://orcid.org/0000-0002-0068-4495 | -
dc.identifier | ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440 | -
dc.identifier | ORCiD: Daguang Xu https://orcid.org/0000-0002-4621-881X | -
dc.identifier | ORCiD: Valentin Comte https://orcid.org/0009-0001-7512-0256 | -
dc.identifier | ORCiD: Oscar Camara https://orcid.org/0000-0002-5125-6132 | -
dc.identifier | ORCiD: Moona Mazher https://orcid.org/0000-0003-4444-5776 | -
dc.identifier | ORCiD: Abdul Qayyum https://orcid.org/0000-0003-3102-1595 | -
dc.identifier | ORCiD: Shangqi Gao https://orcid.org/0000-0003-4567-1636 | -
dc.identifier | ORCiD: Guotai Wang https://orcid.org/0000-0002-8632-158X | -
dc.identifier | ORCiD: Michal K. Grzeszczyk https://orcid.org/0000-0002-5304-1020 | -
dc.identifier | ORCiD: Pablo Arbelaez https://orcid.org/0000-0001-5244-2407 | -
dc.identifier | ORCiD: Wenhao Zhang https://orcid.org/0000-0002-8680-1743 | -
dc.identifier | ORCiD: Meritxell Bach Cuadra https://orcid.org/0000-0003-2730-4285 | -
dc.identifier | ORCiD: Andras Jakab https://orcid.org/0000-0001-6291-9889 | -
dc.identifier.citation | Payette, K et al. (2025) 'Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results', IEEE Transactions on Medical Imaging, 44 (3), pp. 1257 - 1272. doi: 10.1109/TMI.2024.3485554. | en_US
dc.identifier.issn | 0278-0062 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/30908 | -
dc.description.abstract | Segmentation is a critical step in analyzing the developing human fetal brain. There have been vast improvements in automatic segmentation methods in the past several years, and the Fetal Brain Tissue Annotation (FeTA) Challenge 2021 helped to establish an excellent standard of fetal brain segmentation. However, FeTA 2021 was a single-center study, limiting real-world clinical applicability and acceptance. The multi-center FeTA Challenge 2022 focused on advancing the generalizability of fetal brain segmentation algorithms for magnetic resonance imaging (MRI). In FeTA 2022, the training dataset contained images and corresponding manually annotated multi-class labels from two imaging centers, and the testing data contained images from these two centers as well as two additional unseen centers. The multi-center data covered different MR scanners, imaging parameters, and fetal brain super-resolution algorithms. 16 teams participated and 17 algorithms were evaluated. Here, the challenge results are presented, focusing on the generalizability of the submissions. Both in- and out-of-domain, the white matter and ventricles were segmented with the highest accuracy (top Dice scores: 0.89 and 0.87, respectively), while the most challenging structure remains the grey matter (top Dice score: 0.75) due to its anatomical complexity. The top 5 average Dice scores ranged from 0.81-0.82, the top 5 average 95th percentile Hausdorff distance values ranged from 2.3-2.5 mm, and the top 5 volumetric similarity scores ranged from 0.90-0.92. The FeTA Challenge 2022 successfully evaluated and advanced the generalizability of multi-class fetal brain tissue segmentation algorithms for MRI, and it continues to benchmark new algorithms. | en_US
dc.description.sponsorship | 10.13039/100014013-UK Research and Innovation (Grant Number: FLF (MR/T018119/1)); URPP Adaptive Brain Circuits in Development and Learning (AdaBD) Project; 10.13039/501100008494-Vontobel-Stiftung; Anna Müller Grocholski Foundation; 10.13039/100010269-Wellcome Trust (Grant Number: Sir Henry Wellcome Fellowship (201374/Z/16/Z and /); 10.13039/501100008464-EMDO Stiftung; Prof. Dr Max Cloetta Foundation; 10.13039/501100001711-Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Grant Number: SNSF 320030_184932, 205321_182602); 10.13039/501100000266-Engineering and Physical Sciences Research Council (Grant Number: EP/V034537/1); 10.13039/501100002428-Austrian Science Fund (Grant Number: FWF [P 35189-B], I 3925-B27); 10.13039/501100023312-Wellcome EPSRC Centre for Medical Engineering (Grant Number: WT203148/Z/16/Z); 10.13039/501100001821-Vienna Science and Technology Fund (Grant Number: WWTF [LS20-030]); 10.13039/501100006391-Centre d'Imagerie BioMédicale; 10.13039/100000002-National Institutes of Health (Grant Number: Human Placenta Project grant 1U01HD087202-01). | en_US
dc.format.extent | 1257 - 1272 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | deep learning | en_US
dc.subject | domain generalization | en_US
dc.subject | fetal brain MRI | en_US
dc.subject | multi-class image segmentation | en_US
dc.title | Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/TMI.2024.3485554 | -
dc.relation.isPartOf | IEEE Transactions on Medical Imaging | -
pubs.issue | 3 | -
pubs.publication-status | Published | -
pubs.volume | 44 | -
dc.identifier.eissn | 1558-254X | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The Author(s) | -
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2024 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | 5.85 MB | Adobe PDF


This item is licensed under a Creative Commons License.