Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31313
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, J | -
dc.contributor.author | Lin, J | -
dc.contributor.author | Jiang, X | -
dc.contributor.author | Zheng, W | -
dc.contributor.author | Zhong, L | -
dc.contributor.author | Pang, Y | -
dc.contributor.author | Meng, H | -
dc.contributor.author | Li, Z | -
dc.date.accessioned | 2025-05-24T19:17:05Z | -
dc.date.available | 2025-05-24T19:17:05Z | -
dc.date.issued | 2025-05-15 | -
dc.identifier | ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382 | -
dc.identifier | Article number: 16894 | -
dc.identifier.citation | Wu, J. et al. (2025) 'Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention', Scientific Reports, 15(1), 16894, pp. 1-20. doi: 10.1038/s41598-025-02133-5. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31313 | -
dc.description | Data availability: The code for the DPMA framework is publicly available at https://github.com/jia-W-w/DPKA. The experimental datasets used in this study were derived from the AAPM Low Dose CT Grand Challenge https://www.aapm.org/grandchallenge/lowdosect/. | -
dc.description.abstract | Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation. | en_US
dc.description.sponsorship | This work was supported by the National Natural Science Foundation of China (No. U21A20447, No. 62471077, and No. 62171073), the Project of the Central Government in Guidance of Local Science and Technology Development (No. 2024ZYD0270), and the Southwest Medical University Natural Science Foundation (No. 2023ZD004). | en_US
dc.format.extent | 1-20 | -
dc.format.medium | Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Springer Nature | en_US
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | sparse-view CT reconstruction | en_US
dc.subject | deep prior | en_US
dc.subject | multi-scale fusion attention | en_US
dc.subject | model-based optimization | en_US
dc.subject | physics-informed consistency | en_US
dc.subject | biomedical engineering | en_US
dc.subject | computational science | en_US
dc.subject | computer science | en_US
dc.title | Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-05-12 | -
dc.identifier.doi | https://doi.org/10.1038/s41598-025-02133-5 | -
dc.relation.isPartOf | Scientific Reports | -
pubs.issue | 1 | -
pubs.publication-status | Published online | -
pubs.volume | 15 | -
dc.identifier.eissn | 2045-2322 | -
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en | -
dcterms.dateAccepted | 2025-05-12 | -
dc.rights.holder | The Author(s) | -
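The abstract above names two mathematically concrete ingredients that a short sketch can make tangible. The residual regularization strategy constrains the difference between the prior image and the target image; one plausible reading, using symbols that are assumptions rather than the paper's own notation (A the sparse-view projection operator, y the measured sinogram, x_p the deep prior image, R a regularizer with weight lambda), is

\min_x \; \tfrac{1}{2}\,\|Ax - y\|_2^2 \;+\; \lambda\, R(x - x_p)

The physics-informed consistency module is described as being built on range-null space decomposition, which splits any estimate into a component A†y fixed by the measurements and a null-space component (I − A†A)x that the data cannot determine. Below is a minimal, self-contained NumPy sketch of that idea, with a random matrix standing in for the projection operator; the function name and toy setup are illustrative assumptions, not the authors' implementation (the real code is in the DPKA repository linked in the record above).

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the sparse-view projection operator A. m < n mirrors
# the ill-posed setting: fewer projection measurements than image pixels.
m, n = 40, 100
A = rng.standard_normal((m, n))
A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudo-inverse A†

x_true = rng.standard_normal(n)      # unknown image (flattened), demo only
y = A @ x_true                       # simulated sparse-view measurements

def consistency_projection(x, y, A, A_pinv):
    """Range-null space decomposition: keep the range-space component
    A†y dictated by the measurements, and retain only the null-space
    component (I - A†A)x of the input estimate x."""
    return A_pinv @ y + x - A_pinv @ (A @ x)

x_net = rng.standard_normal(n)       # stand-in for a network's estimate
x_hat = consistency_projection(x_net, y, A, A_pinv)

# The corrected estimate satisfies the projection constraint exactly.
assert np.allclose(A @ x_hat, y)

Because A A† = I when A has full row rank, the corrected estimate reproduces the measured projections exactly, while its null-space component, which the measurements leave unconstrained, is supplied by the learned prior.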
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2025 The Author(s). Rights and permissions: Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-nd/4.0/. | 12.7 MB | Adobe PDF


This item is licensed under a Creative Commons License.