Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32740

| Title: | Edge priors guided deep unrolling network for single image super-resolution |
| Authors: | Song, H; Han, J; Ma, H; Jia, H; Shen, X; Gou, J; Lai, Y; Meng, H |
| Keywords: | super-resolution;edge priors;deep unrolling network |
| Issue Date: | 31-Dec-2025 |
| Publisher: | Elsevier |
| Citation: | Song, H. et al. (2026) 'Edge priors guided deep unrolling network for single image super-resolution', Expert Systems with Applications, 308, 131019, pp. 1 - 11. doi: 10.1016/j.eswa.2025.131019. |
| Abstract: | The field of single image super-resolution (SISR) has attracted significant attention over the past few decades. The primary challenge of SISR lies in restoring the high-frequency details in low-resolution (LR) images, which are crucial for human perception. Current deep learning-based SISR methods either increase the depth of the network or indiscriminately incorporate emerging technologies, such as attention mechanisms, to address this challenge. However, these methods treat deep networks as black boxes, achieving performance at the cost of efficiency and network redundancy, without carefully considering how internal components interact to enhance reconstruction quality. To address this limitation, we incorporate edge priors into a classical image restoration model to design the network framework and propose an edge priors guided deep unrolling network (EPGDUN), which consists of three components: an Edge Feature Extraction Module (EFEM), an Intermediate Variable Updating Module (IVUM), and a Variable-Guided Reconstruction Module (VGRM). Specifically, we unroll the image restoration model with edge priors via half-quadratic splitting and proximal gradient descent to obtain three subproblems, whose solution processes correspond to the iterative stages of the three submodules of EPGDUN. The incorporation of edge priors constrains the network output, strengthening image boundaries, enhancing interpretability, and improving the network’s understanding of image structure. Extensive experiments illustrate that EPGDUN achieves performance on par with or exceeding that of state-of-the-art methods, both uninterpretable black-box neural models and interpretable deep unrolling networks. These findings underscore EPGDUN’s potential to advance low-level vision applications and other domains requiring mathematical interpretability. The source code for EPGDUN will be available at https://github.com/songhp/EPGDUN. |
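The unrolling step described in the abstract maps one optimization iteration onto the three modules. Since the full text is embargoed, the following is only a minimal sketch of how half-quadratic splitting plus a proximal gradient step typically yields three such subproblems; the degradation operator $H$, regularizer $R$, edge extractor $\mathcal{E}$, and the parameters $\lambda$, $\mu$, $\eta$ are generic assumptions, not the authors' formulation.

```latex
% Generic HQS + proximal-gradient unrolling sketch for an
% edge-prior-regularized restoration model (requires amsmath).
% H, R, \mathcal{E}, \lambda, \mu, \eta are illustrative assumptions,
% not the paper's exact notation.
\begin{align}
  \min_{x,\,z}\;
    & \tfrac{1}{2}\lVert y - Hx \rVert_2^2
      + \lambda\, R(z, e)
      + \tfrac{\mu}{2}\lVert z - x \rVert_2^2
      && \text{(split objective)} \\
  e^{k+1} &= \mathcal{E}\bigl(x^{k}\bigr)
      && \text{(edge features, cf.\ EFEM)} \\
  z^{k+1} &= \operatorname{prox}_{(\lambda/\mu)\, R(\cdot,\, e^{k+1})}\bigl(x^{k}\bigr)
      && \text{(auxiliary variable, cf.\ IVUM)} \\
  x^{k+1} &= x^{k} - \eta\Bigl(H^{\top}\bigl(Hx^{k} - y\bigr)
      + \mu\bigl(x^{k} - z^{k+1}\bigr)\Bigr)
      && \text{(reconstruction, cf.\ VGRM)}
\end{align}
```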
| Description: | Highlights:
• We propose an interpretable EPGDUN for single image super-resolution.
• We design a non-local block to gather a broader range of prior information (a generic sketch of such a block follows these highlights).
• We develop a cross-fusion module to selectively fuse edge and image features.
• Our model demonstrates outstanding qualitative and quantitative performance.
Data availability: Data will be made available on request. |
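The non-local block named in the highlights is, in the general literature, a global self-attention operation over all spatial positions. As the implementation is embargoed, the PyTorch sketch below shows only a standard embedded-Gaussian non-local block (Wang et al., 2018); the class name `NonLocalBlock`, the channel reduction, and the residual placement are illustrative assumptions, not the EPGDUN code.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal embedded-Gaussian non-local block (Wang et al., 2018).

    A generic sketch of the kind of non-local operation the highlights
    mention; channel sizes and placement inside EPGDUN are assumptions.
    """

    def __init__(self, channels: int):
        super().__init__()
        inter = max(channels // 2, 1)  # reduced embedding dimension
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value
        self.out = nn.Conv2d(inter, channels, kernel_size=1)    # restore channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, inter)
        k = self.phi(x).flatten(2)                    # (b, inter, hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, inter)
        attn = torch.softmax(q @ k, dim=-1)           # affinities over all positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)            # a batch of feature maps
    print(NonLocalBlock(64)(feats).shape)         # torch.Size([1, 64, 32, 32])
```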
| URI: | https://bura.brunel.ac.uk/handle/2438/32740 |
| DOI: | https://doi.org/10.1016/j.eswa.2025.131019 |
| ISSN: | 0957-4174 |
| Other Identifiers: | ORCiD: Heping Song https://orcid.org/0000-0002-8583-2804; ORCiD: Jie Han https://orcid.org/0009-0000-6657-980X; ORCiD: Hui Ma https://orcid.org/0009-0001-8140-1484; ORCiD: Hongjie Jia https://orcid.org/0000-0002-3354-5184; ORCiD: Xiangjun Shen https://orcid.org/0000-0002-3359-8972; ORCiD: Jianping Gou https://orcid.org/0000-0003-1413-0693; ORCiD: Yuping Lai https://orcid.org/0000-0002-3797-1228; ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382. Article number: 131019 |
| Appears in Collections: | Dept of Electronic and Electrical Engineering; Embargoed Research Papers |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| FullText.pdf | Embargoed until 31 December 2026. Copyright © 2026 Elsevier Ltd. All rights reserved. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ (see: https://www.elsevier.com/about/policies/sharing). | 2.15 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License