Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32740

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Song, H | - |
| dc.contributor.author | Han, J | - |
| dc.contributor.author | Ma, H | - |
| dc.contributor.author | Jia, H | - |
| dc.contributor.author | Shen, X | - |
| dc.contributor.author | Gou, J | - |
| dc.contributor.author | Lai, Y | - |
| dc.contributor.author | Meng, H | - |
| dc.date.accessioned | 2026-01-27T10:57:10Z | - |
| dc.date.available | 2026-01-27T10:57:10Z | - |
| dc.date.issued | 2025-12-31 | - |
| dc.identifier | ORCiD: Heping Song https://orcid.org/0000-0002-8583-2804 | - |
| dc.identifier | ORCiD: Jie Han https://orcid.org/0009-0000-6657-980X | - |
| dc.identifier | ORCiD: Hui Ma https://orcid.org/0009-0001-8140-1484 | - |
| dc.identifier | ORCiD: Hongjie Jia https://orcid.org/0000-0002-3354-5184 | - |
| dc.identifier | ORCiD: Xiangjun Shen https://orcid.org/0000-0002-3359-8972 | - |
| dc.identifier | ORCiD: Jianping Gou https://orcid.org/0000-0003-1413-0693 | - |
| dc.identifier | ORCiD: Yuping Lai https://orcid.org/0000-0002-3797-1228 | - |
| dc.identifier | ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382 | - |
| dc.identifier | Article number: 131019 | - |
| dc.identifier.citation | Song, H. et al. (2026) 'Edge priors guided deep unrolling network for single image super-resolution', Expert Systems with Applications, 308, 131019, pp. 1 - 11. doi: 10.1016/j.eswa.2025.131019. | en_US |
| dc.identifier.issn | 0957-4174 | - |
| dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32740 | - |
| dc.description | Highlights: • We propose an interpretable EPGDUN for single image super-resolution. • We design a non-local block to gather a broader range of prior information. • We develop a cross-fusion module to selectively fuse edge and image features. • Our model demonstrates outstanding qualitative and quantitative performance. | en_US |
| dc.description | Data availability: Data will be made available on request. | - |
| dc.description.abstract | The field of single image super-resolution (SISR) has garnered significant consideration over the past few decades. The primary challenge of SISR lies in restoring the high-frequency details in low-resolution (LR) images, which are crucial for human perception. Current deep learning-based SISR methods either increase the depth of the network or indiscriminately incorporate emerging technologies, such as attention mechanisms, to address this challenge. However, these methods treat the deep networks as a black-box, achieving performance at the cost of efficiency and network redundancy, without carefully considering how internal components interact to enhance the reconstruction quality. To address this limitation, we incorporate edge priors into a classical image restoration model to design the network framework and propose an edge priors guided deep unrolling network (EPGDUN), which consists of three components: Edge Feature Extraction Module (EFEM), Intermediate Variable Updating Module (IVUM), and Variable-Guided Reconstruction Module (VGRM). Specifically, we unroll the image restoration model with edge priors via half-quadratic splitting and the proximal gradient descent method to obtain three subproblems, whose solving process corresponds to the iterative stages of the three submodules of EPGDUN. The combination of edge priors constrains the network output, strengthening the image boundaries, enhancing interpretability, and improving the network’s understanding of image structure. Extensive experiments illustrate that EPGDUN achieves performance on par with or exceeding that of state-of-the-art methods, both uninterpretable black-box neural models and interpretable deep unrolling networks. These findings underscore EPGDUN’s potential to advance low-level vision applications and other domains requiring mathematical interpretability. The source code for EPGDUN will be available at https://github.com/songhp/EPGDUN. | en_US |
| dc.description.sponsorship | This work was supported by the National Natural Science Foundation of China under Grants 62472201, 62376108 and 62172193. | en_US |
| dc.format.extent | 1 - 11 | - |
| dc.format.medium | Print-Electronic | - |
| dc.language | English | - |
| dc.language.iso | en_US | en_US |
| dc.publisher | Elsevier | en_US |
| dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International | - |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | - |
| dc.subject | super-resolution | en_US |
| dc.subject | edge priors | en_US |
| dc.subject | deep unrolling network | en_US |
| dc.title | Edge priors guided deep unrolling network for single image super-resolution | en_US |
| dc.type | Article | en_US |
| dc.date.dateAccepted | 2025-12-26 | - |
| dc.identifier.doi | https://doi.org/10.1016/j.eswa.2025.131019 | - |
| dc.relation.isPartOf | Expert Systems with Applications | - |
| pubs.issue | 1 May 2026 | - |
| pubs.publication-status | Published | - |
| pubs.volume | 308 | - |
| dc.identifier.eissn | 1873-6793 | - |
| dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en | - |
| dcterms.dateAccepted | 2025-12-26 | - |
| dc.rights.holder | Elsevier Ltd. | - |
| dc.contributor.orcid | Song, Heping [0000-0002-8583-2804] | - |
| dc.contributor.orcid | Han, Jie [0009-0000-6657-980X] | - |
| dc.contributor.orcid | Ma, Hui [0009-0001-8140-1484] | - |
| dc.contributor.orcid | Jia, Hongjie [0000-0002-3354-5184] | - |
| dc.contributor.orcid | Shen, Xiangjun [0000-0002-3359-8972] | - |
| dc.contributor.orcid | Gou, Jianping [0000-0003-1413-0693] | - |
| dc.contributor.orcid | Lai, Yuping [0000-0002-3797-1228] | - |
| dc.contributor.orcid | Meng, Hongying [0000-0002-8836-1382] | - |
| Appears in Collections: | Dept of Electronic and Electrical Engineering; Embargoed Research Papers | - |
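The abstract above describes unrolling an image restoration model via half-quadratic splitting and proximal gradient descent, with each solver stage becoming a network module. As an illustrative sketch only (this is not the paper's EPGDUN code; the objective, sparsity prior, and soft-thresholding prox below are generic stand-ins for the learned edge-prior modules), a classical ISTA-style proximal gradient iteration looks like:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 -- a simple hand-crafted prior;
    # a deep unrolling network would replace this step with a learned module.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient_descent(A, y, lam=0.05, iters=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by alternating a gradient
    step on the data-fidelity term with a proximal step on the prior.
    Unrolling fixes the iteration count and turns each step into a layer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                         # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)  # proximal (prior) step
    return x

# Toy usage: recover a sparse signal from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = proximal_gradient_descent(A, y)
```

In a deep unrolling network such as the one the abstract describes, the number of iterations is fixed, and the step size, threshold, and proximal mapping are replaced by trainable components (here, the paper's EFEM, IVUM, and VGRM stages).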
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| FullText.pdf | Embargoed until 31 December 2026. Copyright © 2026 Elsevier Ltd. All rights reserved. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ (see: https://www.elsevier.com/about/policies/sharing). | 2.15 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License