Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31831
Full metadata record
dc.contributor.author: Zubair, M
dc.contributor.author: Rais, HM
dc.contributor.author: Alazemi, T
dc.date.accessioned: 2025-08-26T11:01:41Z
dc.date.available: 2025-08-26T11:01:41Z
dc.date.issued: 2025-01-07
dc.identifier: ORCiD: Muhammad Zubair https://orcid.org/0000-0002-8457-0208
dc.identifier: ORCiD: Helmi Md Rais https://orcid.org/0000-0002-7878-965X
dc.identifier: ORCiD: Talal Alazemi https://orcid.org/0009-0004-1859-2304
dc.identifier.citation: Zubair, M., Rais, H.M. and Alazemi, T. (2025) 'A Novel Attention-Guided Enhanced U-Net With Hybrid Edge-Preserving Structural Loss for Low-Dose CT Image Denoising', IEEE Access, 13, pp. 6909-6923. doi: 10.1109/ACCESS.2025.3526619.
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/31831
dc.description: Data Availability Statement: The data supporting this study are available from the corresponding author upon reasonable request.
dc.description.abstract: Computed Tomography (CT), pivotal for medical diagnostics, involves exposure to electromagnetic radiation, potentially elevating the risk of leukemia and cancer. Low-dose CT (LDCT) imaging has emerged to mitigate these risks, reducing radiation exposure by up to 86%. However, the lower dose significantly degrades image quality, introducing noise and artifacts that reduce the diagnostic accuracy of Computer-Aided Diagnostic (CAD) systems. This study presents a novel U-Net architecture featuring several key enhancements. The model integrates residual blocks to improve feature representation and employs a custom hybrid loss function that combines structural loss with gradient regularization based on the Euclidean norm, promoting superior retention of CT image quality. Additionally, Attention Gates incorporated in the up-sampling layers of the proposed model optimize the extraction of critical features, ensuring more precise denoising of CT images. The model undergoes iterative training with the custom loss function to progressively refine its parameters and improve CT image denoising. Its performance is rigorously evaluated, both qualitatively and quantitatively, on the 2016 Low-Dose CT AAPM Grand Challenge dataset. The results, assessed with Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Square Error (RMSE), demonstrate promising improvements over state-of-the-art techniques. The model effectively reduces noise while preserving critical fine details, establishing itself as a highly efficient solution for LDCT image denoising.
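As an illustration of the objective described in the abstract, the sketch below combines an SSIM-based structural term with a normalized Euclidean-norm penalty on the difference between the image gradients of the denoised output and the normal-dose reference, which is one plausible reading of "structural loss with gradient regularization using the Euclidean norm". It is a minimal PyTorch sketch under stated assumptions (uniform-window SSIM, finite-difference gradients, and a hypothetical weight lambda_grad); it is not the authors' published implementation, for which see the article at the DOI in the citation above.

```python
# Minimal, illustrative sketch of a hybrid edge-preserving structural loss.
# Assumptions (not from the article): uniform 7x7 SSIM windows, finite-difference
# image gradients, and a hypothetical weighting hyperparameter `lambda_grad`.
import torch
import torch.nn.functional as F


def ssim(x: torch.Tensor, y: torch.Tensor,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Mean SSIM for batches of single-channel images scaled to [0, 1],
    computed with uniform 7x7 windows via average pooling."""
    mu_x = F.avg_pool2d(x, 7, 1, padding=3)
    mu_y = F.avg_pool2d(y, 7, 1, padding=3)
    var_x = F.avg_pool2d(x * x, 7, 1, padding=3) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 7, 1, padding=3) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 7, 1, padding=3) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()


def image_gradients(img: torch.Tensor):
    """Finite-difference gradients along height and width for (N, C, H, W) tensors."""
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    return dy, dx


def hybrid_loss(denoised: torch.Tensor, reference: torch.Tensor,
                lambda_grad: float = 0.1) -> torch.Tensor:
    """Structural term (1 - SSIM) plus a root-mean-square (normalized Euclidean-norm)
    penalty on the mismatch between gradients of the denoised and reference images."""
    structural = 1.0 - ssim(denoised, reference)
    dy_d, dx_d = image_gradients(denoised)
    dy_r, dx_r = image_gradients(reference)
    grad_penalty = torch.sqrt(F.mse_loss(dy_d, dy_r) + 1e-8) \
                 + torch.sqrt(F.mse_loss(dx_d, dx_r) + 1e-8)
    return structural + lambda_grad * grad_penalty


if __name__ == "__main__":
    denoised = torch.rand(2, 1, 64, 64)   # stand-in for the network's output
    reference = torch.rand(2, 1, 64, 64)  # stand-in for the normal-dose CT target
    print(hybrid_loss(denoised, reference).item())
```

In a sketch like this, the SSIM term encourages perceptual/structural fidelity while the gradient penalty discourages the smoothing-away of edges; the relative weight lambda_grad would in practice be tuned on validation data.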
dc.description.sponsorship: Institute of Emerging Digital Technologies (EDiT) & Center For Cyber Physical Systems (C2PS), Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia.
dc.format.extent: 6909 - 6923
dc.format.medium: Electronic
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: attention gate
dc.subject: deep learning
dc.subject: image enhancement
dc.subject: LDCT image denoising
dc.subject: residual blocks
dc.title: A Novel Attention-Guided Enhanced U-Net With Hybrid Edge-Preserving Structural Loss for Low-Dose CT Image Denoising
dc.type: Article
dc.date.dateAccepted: 2025-01-02
dc.identifier.doi: https://doi.org/10.1109/ACCESS.2025.3526619
pubs.volume: 13
dc.identifier.eissn: 2169-3536
dc.rights.license: https://creativecommons.org/licenses/by/4.0/legalcode.en
dcterms.dateAccepted: 2025-01-02
dc.rights.holder: The Authors
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Size: 2.35 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.