Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32740
Full metadata record
DC Field | Value | Language
dc.contributor.author | Song, H | -
dc.contributor.author | Han, J | -
dc.contributor.author | Ma, H | -
dc.contributor.author | Jia, H | -
dc.contributor.author | Shen, X | -
dc.contributor.author | Gou, J | -
dc.contributor.author | Lai, Y | -
dc.contributor.author | Meng, H | -
dc.date.accessioned | 2026-01-27T10:57:10Z | -
dc.date.available | 2026-01-27T10:57:10Z | -
dc.date.issued | 2025-12-31 | -
dc.identifier | ORCiD: Heping Song https://orcid.org/0000-0002-8583-2804 | -
dc.identifier | ORCiD: Jie Han https://orcid.org/0009-0000-6657-980X | -
dc.identifier | ORCiD: Hui Ma https://orcid.org/0009-0001-8140-1484 | -
dc.identifier | ORCiD: Hongjie Jia https://orcid.org/0000-0002-3354-5184 | -
dc.identifier | ORCiD: Xiangjun Shen https://orcid.org/0000-0002-3359-8972 | -
dc.identifier | ORCiD: Jianping Gou https://orcid.org/0000-0003-1413-0693 | -
dc.identifier | ORCiD: Yuping Lai https://orcid.org/0000-0002-3797-1228 | -
dc.identifier | ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382 | -
dc.identifier | Article number: 131019 | -
dc.identifier.citation | Song, H. et al. (2026) 'Edge priors guided deep unrolling network for single image super-resolution', Expert Systems with Applications, 308, 131019, pp. 1 - 11. doi: 10.1016/j.eswa.2025.131019. | en_US
dc.identifier.issn | 0957-4174 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32740 | -
dc.description | Highlights: • We propose an interpretable EPGDUN for single image super-resolution. • We design a non-local block to gather a broader range of prior information. • We develop a cross-fusion module to selectively fuse edge and image features. • Our model demonstrates outstanding qualitative and quantitative performance. | en_US
dc.description | Data availability: Data will be made available on request. | -
dc.description.abstract | The field of single image super-resolution (SISR) has garnered significant attention over the past few decades. The primary challenge of SISR lies in restoring the high-frequency details in low-resolution (LR) images, which are crucial for human perception. Current deep learning-based SISR methods either increase the depth of the network or indiscriminately incorporate emerging technologies, such as attention mechanisms, to address this challenge. However, these methods treat deep networks as black boxes, achieving performance at the cost of efficiency and network redundancy, without carefully considering how internal components interact to enhance reconstruction quality. To address this limitation, we incorporate edge priors into a classical image restoration model to design the network framework and propose an edge priors guided deep unrolling network (EPGDUN), which consists of three components: the Edge Feature Extraction Module (EFEM), the Intermediate Variable Updating Module (IVUM), and the Variable-Guided Reconstruction Module (VGRM). Specifically, we unroll the image restoration model with edge priors via half-quadratic splitting and the proximal gradient descent method to obtain three subproblems, whose solution processes correspond to the iterative stages of the three submodules of EPGDUN. The incorporation of edge priors constrains the network output, strengthening image boundaries, enhancing interpretability, and improving the network’s understanding of image structure. Extensive experiments illustrate that EPGDUN achieves performance on par with or exceeding that of state-of-the-art uninterpretable black-box neural models and interpretable deep unrolling networks. These findings underscore EPGDUN’s potential to advance low-level vision applications and other domains requiring mathematical interpretability. The source code for EPGDUN will be available at https://github.com/songhp/EPGDUN. | en_US
dc.description.sponsorship | This work was supported by the National Natural Science Foundation of China under Grants 62472201, 62376108 and 62172193. | en_US
dc.format.extent | 1 - 11 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | super-resolution | en_US
dc.subject | edge priors | en_US
dc.subject | deep unrolling network | en_US
dc.title | Edge priors guided deep unrolling network for single image super-resolution | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-12-26 | -
dc.identifier.doi | https://doi.org/10.1016/j.eswa.2025.131019 | -
dc.relation.isPartOf | Expert Systems with Applications | -
pubs.issue | 1 May 2026 | -
pubs.publication-status | Published | -
pubs.volume | 308 | -
dc.identifier.eissn | 1873-6793 | -
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.en | -
dcterms.dateAccepted | 2025-12-26 | -
dc.rights.holder | Elsevier Ltd. | -
dc.contributor.orcid | Song, Heping [0000-0002-8583-2804] | -
dc.contributor.orcid | Han, Jie [0009-0000-6657-980X] | -
dc.contributor.orcid | Ma, Hui [0009-0001-8140-1484] | -
dc.contributor.orcid | Jia, Hongjie [0000-0002-3354-5184] | -
dc.contributor.orcid | Shen, Xiangjun [0000-0002-3359-8972] | -
dc.contributor.orcid | Gou, Jianping [0000-0003-1413-0693] | -
dc.contributor.orcid | Lai, Yuping [0000-0002-3797-1228] | -
dc.contributor.orcid | Meng, Hongying [0000-0002-8836-1382] | -
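
Formulation sketch for the abstract above: the dc.description.abstract field states that an image restoration model with edge priors is unrolled via half-quadratic splitting and a proximal gradient descent step into three subproblems, solved stage by stage by the EFEM, IVUM and VGRM modules. This record does not give the paper's actual objective, so the LaTeX below is only a minimal sketch of a generic edge-prior SISR formulation, assuming a degradation operator A, an edge-extraction operator G, priors Phi and Psi, auxiliary variables z and e, and weights lambda, mu, beta, gamma, eta; all of these are illustrative placeholders rather than the authors' notation, and the mapping of subproblems to the named modules is likewise an assumption.

% Hedged sketch only; requires \usepackage{amsmath}. Not the authors' exact model.
% Generic SISR restoration objective with an additional edge prior term:
\begin{equation}
  \min_{\mathbf{x}}\;
    \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{A}\mathbf{x} \rVert_2^2
    + \lambda\,\Phi(\mathbf{x})
    + \mu\,\Psi(\mathbf{G}\mathbf{x}).
\end{equation}
% Half-quadratic splitting introduces auxiliaries z (image prior) and e (edge prior):
\begin{equation}
  \min_{\mathbf{x},\,\mathbf{z},\,\mathbf{e}}\;
    \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{A}\mathbf{x} \rVert_2^2
    + \lambda\,\Phi(\mathbf{z})
    + \mu\,\Psi(\mathbf{e})
    + \tfrac{\beta}{2}\,\lVert \mathbf{z} - \mathbf{x} \rVert_2^2
    + \tfrac{\gamma}{2}\,\lVert \mathbf{e} - \mathbf{G}\mathbf{x} \rVert_2^2.
\end{equation}
% Alternating minimization then gives three subproblems per unrolled stage; in a deep
% unrolling network the proximal maps would be replaced by learned subnetworks
% (plausibly the EFEM, IVUM and VGRM roles named in the abstract; an assumption here):
\begin{align}
  \mathbf{e}^{k+1} &= \operatorname{prox}_{(\mu/\gamma)\Psi}\bigl(\mathbf{G}\mathbf{x}^{k}\bigr),\\
  \mathbf{z}^{k+1} &= \operatorname{prox}_{(\lambda/\beta)\Phi}\bigl(\mathbf{x}^{k}\bigr),\\
  \mathbf{x}^{k+1} &= \mathbf{x}^{k}
      - \eta\Bigl[\mathbf{A}^{\top}\bigl(\mathbf{A}\mathbf{x}^{k}-\mathbf{y}\bigr)
      + \beta\bigl(\mathbf{x}^{k}-\mathbf{z}^{k+1}\bigr)
      + \gamma\,\mathbf{G}^{\top}\bigl(\mathbf{G}\mathbf{x}^{k}-\mathbf{e}^{k+1}\bigr)\Bigr].
\end{align}
% The last line is the proximal-gradient-style reconstruction step: a gradient descent
% update on the data-fidelity term plus the two quadratic coupling penalties.
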
Appears in Collections: Dept of Electronic and Electrical Engineering Embargoed Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 31 December 2026. Copyright © 2026 Elsevier Ltd. All rights reserved. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ (see: https://www.elsevier.com/about/policies/sharing). | 2.15 MB | Adobe PDF


This item is licensed under a Creative Commons License.