Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/31313
Title: | Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention |
Authors: | Wu, J; Lin, J; Jiang, X; Zheng, W; Zhong, L; Pang, Y; Meng, H; Li, Z |
Keywords: | sparse-view CT reconstruction;deep prior;multi-scale fusion attention;model-based optimization;physics-informed consistency;biomedical engineering;computational science;computer science |
Issue Date: | 15-May-2025 |
Publisher: | Springer Nature |
Citation: | Wu, J. et al. (2025) 'Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention', Scientific Reports, 15 (1), 16894, pp. 1 - 20. doi: 10.1038/s41598-025-02133-5. |
Abstract: | Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation. |
Description: | Data availability: The code for the DPMA framework is publicly available at https://github.com/jia-W-w/DPKA. The experimental datasets used in this study were derived from the AAPM Low Dose CT Grand Challenge https://www.aapm.org/grandchallenge/lowdosect/. |
URI: | https://bura.brunel.ac.uk/handle/2438/31313 |
DOI: | https://doi.org/10.1038/s41598-025-02133-5 |
Other Identifiers: | ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382; Article number: 16894
Appears in Collections: | Dept of Electronic and Electrical Engineering Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | Copyright © 2025 The Author(s). Rights and permissions: Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-nd/4.0/. | 12.7 MB | Adobe PDF | View/Open |
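The abstract describes a physics-informed consistency module based on range-null space decomposition. As a hedged illustration only (not the paper's implementation — the small random matrix `A`, the variable names, and the toy dimensions below are hypothetical stand-ins for the CT projection operator and image), a minimal NumPy sketch of the decomposition x = A⁺y + (I − A⁺A)x_net, which keeps the measurement-determined range component fixed while letting a network estimate supply the null-space component:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 6, 10                      # fewer measurements than unknowns (sparse views)
A = rng.standard_normal((m, n))   # stand-in for the projection (system) matrix
x_true = rng.standard_normal(n)   # stand-in for the true image
y = A @ x_true                    # observed sparse-view projections

A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudoinverse of A

def consistency_project(x_net, A, A_pinv, y):
    """Replace the range-space component of a network estimate x_net with the
    component dictated by the measurements y; keep the null-space component
    (the part A cannot observe) from the network."""
    range_part = A_pinv @ y                     # fixed by the data
    null_part = x_net - A_pinv @ (A @ x_net)    # (I - A^+ A) x_net
    return range_part + null_part

x_net = x_true + 0.1 * rng.standard_normal(n)   # hypothetical network output
x_dc = consistency_project(x_net, A, A_pinv, y)

# The projected estimate reproduces the measurements exactly,
# because A(A^+ y) = y when y lies in the range of A.
print(np.allclose(A @ x_dc, y))   # True
```

This captures why such a module enforces adherence to the projection data: the range component of the output is pinned to the measurements regardless of what the network predicts, so learning only affects the unobserved null space.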