Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/28412
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jodeiri, A | -
dc.contributor.author | Seyedarabi, H | -
dc.contributor.author | Danishvar, S | -
dc.contributor.author | Shafiei, SH | -
dc.contributor.author | Sales, JG | -
dc.contributor.author | Khoori, M | -
dc.contributor.author | Rahimi, S | -
dc.contributor.author | Mortazavi, SMJ | -
dc.date.accessioned | 2024-02-26T19:58:57Z | -
dc.date.available | 2024-02-26T19:58:57Z | -
dc.date.issued | 2024-02-17 | -
dc.identifier | ORCiD: Sebelan Danishvar https://orcid.org/0000-0002-8258-0437 | -
dc.identifier | ORCiD: Moein Khoori https://orcid.org/0000-0002-0185-8733 | -
dc.identifier | ORCiD: Seyed Mohammad Javad Mortazavi https://orcid.org/0000-0003-4189-7777 | -
dc.identifier | 194 | -
dc.identifier.citation | Jodeiri, A. et al. (2024) 'Concurrent Learning Approach for Estimation of Pelvic Tilt from Anterior–Posterior Radiograph', Bioengineering, 11 (2), 194, pp. 1 - 13. doi: 10.3390/bioengineering11020194. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/28412 | -
dc.description | Data Availability Statement: Dataset available on request from the authors. | en_US
dc.description.abstract | Accurate and reliable estimation of the pelvic tilt is one of the essential pre-planning factors for total hip arthroplasty to prevent common post-operative complications such as implant impingement and dislocation. Inspired by the latest advances in deep learning-based systems, our focus in this paper has been to present an innovative and accurate method for estimating the functional pelvic tilt (PT) from a standing anterior–posterior (AP) radiography image. We introduce an encoder–decoder-style network based on a concurrent learning approach called VGG-UNET (VGG embedded in U-NET), where a deep fully convolutional network known as VGG is embedded at the encoder part of an image segmentation network, i.e., U-NET. In the bottleneck of the VGG-UNET, in addition to the decoder path, we use another path utilizing lightweight convolutional and fully connected layers to combine all extracted feature maps from the final convolution layer of VGG and thus regress PT. In the test phase, we exclude the decoder path and consider only a single target task, i.e., PT estimation. The absolute errors obtained using VGG-UNET, VGG, and Mask R-CNN are 3.04 ± 2.49, 3.92 ± 2.92, and 4.97 ± 3.87, respectively. It is observed that the VGG-UNET leads to a more accurate prediction with a lower standard deviation (STD). Our experimental results demonstrate that the proposed multi-task network leads to a significantly improved performance compared to the best-reported results based on cascaded networks. | en_US
dc.description.sponsorship | This research received no external funding. | en_US
dc.format.extent | 1 - 13 | -
dc.language | en | -
dc.language.iso | en_US | en_US
dc.publisher | MDPI | en_US
dc.rights | Copyright © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | total hip arthroplasty | en_US
dc.subject | pelvic tilt | en_US
dc.subject | multi-task learning | en_US
dc.subject | convolutional neural network | en_US
dc.subject | segmentation | en_US
dc.subject | VGG | en_US
dc.subject | U-NET | en_US
dc.title | Concurrent Learning Approach for Estimation of Pelvic Tilt from Anterior–Posterior Radiograph | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.3390/bioengineering11020194 | -
dc.relation.isPartOf | Bioengineering | -
pubs.issue | 2 | -
pubs.publication-status | Published online | -
pubs.volume | 11 | -
dc.identifier.eissn | 2306-5354 | -
dc.rights.holder | The authors | -
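
Note on the method described in the abstract above (dc.description.abstract): the paper outlines a multi-task architecture in which a VGG encoder is shared by a U-NET-style decoder for segmentation and a lightweight convolutional/fully connected head that regresses pelvic tilt, with the decoder path dropped at test time. The following is a minimal PyTorch sketch of that idea only; the backbone choice (torchvision's VGG16), layer sizes, single-channel mask output, and loss weighting are illustrative assumptions, not the authors' exact implementation.

    # Minimal sketch of the multi-task "VGG-UNET with PT regression head" idea.
    # All layer sizes and the decoder simplification are assumptions for illustration.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class VGGUNetPT(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = vgg16().features  # VGG16 convolutional backbone (encoder)
            # Lightweight regression head on the bottleneck feature maps (512 channels).
            self.reg_head = nn.Sequential(
                nn.Conv2d(512, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, 1),            # pelvic tilt estimate
            )
            # Heavily simplified stand-in for the U-NET decoder path (training only).
            self.decoder = nn.Sequential(
                nn.Conv2d(512, 1, kernel_size=1),                  # segmentation logits
                nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            )

        def forward(self, x, with_seg=True):
            feats = self.encoder(x)          # bottleneck features
            pt = self.reg_head(feats)        # regressed pelvic tilt
            if with_seg:                     # training: concurrent segmentation + regression
                return pt, self.decoder(feats)
            return pt                        # test phase: PT estimation only

    # Example usage:
    # model = VGGUNetPT()
    # x = torch.randn(1, 3, 224, 224)
    # pt, seg = model(x)                     # training: both task outputs
    # pt_only = model(x, with_seg=False)     # inference: pelvic tilt only
    # Joint objective (weighting is an assumption):
    # loss = mse(pt, pt_true) + 0.5 * bce_with_logits(seg, mask)
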
Appears in Collections: Dept of Civil and Environmental Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | 4.13 MB | Adobe PDF

This item is licensed under a Creative Commons License.