Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32086
Title: Representation Sampling and Hybrid Transformer Network for Image Compressed Sensing
Authors: Song, H
Gong, J
Jia, H
Shen, X
Gou, J
Meng, H
Wang, L
Keywords: compressed sensing;deep unrolling network;representation sampling
Issue Date: 29-Sep-2025
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Song, H. et al. (2025) 'Representation Sampling and Hybrid Transformer Network for Image Compressed Sensing', IEEE Transactions on Circuits and Systems for Video Technology, 0 (early access), pp. 1 - 15. doi: 10.1109/tcsvt.2025.3614371.
Abstract: Deep unrolling networks (DUNs) have attracted substantial attention in the field of image compressed sensing (CS) due to their superior performance and good interpretability, achieved by recasting optimization algorithms as deep networks. However, existing DUNs suffer from low sampling efficiency, and their improvements in reconstruction quality rely heavily on high model complexity. To address these issues, we propose a lightweight Representation Sampling and Hybrid Transformer Network (RHT-Net). First, we propose a Representation-CS (RCS) model that extracts high-level features to achieve efficient sampling. This sampling strategy yields highly dense, semantically rich and extremely compact features without observing the original pixels, which also reduces the cross-domain loss during iteration. Second, we design a Tri-Scale Sparse Denoising (TSSD) module in the deep unrolling stages to extend sparse proximal projections, leveraging multi-scale auxiliary variables to enhance multi-feature flow and memory effects. Third, we develop a hybrid Transformer module comprising a Global Cross Attention (GCA) block and a Window Local Attention (WLA) block, which uses the measurements to cross-estimate the reconstruction error, thereby generating finer spatial details and improving local recovery. Experiments demonstrate that the enhanced version of RHT-Net outperforms current state-of-the-art methods by up to 1.17 dB in PSNR, while the lightweight RHT-Net achieves a 0.43 dB gain with up to 22 times fewer model parameters. The code will be released publicly at https://github.com/songhp/RHTNet.
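For readers unfamiliar with the optimization that DUNs unroll, the following is a minimal, self-contained sketch (not the paper's RHT-Net) of the classic compressed sensing pipeline: linear sampling y = Ax of a sparse signal, followed by ISTA recovery, whose gradient step and sparse proximal projection are exactly the operations that deep unrolling networks replace with learned stages. All names, dimensions and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CS setup (illustrative only): a k-sparse signal of length n is
# sampled with a random Gaussian matrix A at sampling ratio m/n.
n, m, k = 128, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x  # linear measurements (the "sampling" step)

# ISTA: the iteration that DUNs unfold into network stages --
# a gradient step on ||y - A x||^2, then soft-thresholding as the
# sparse proximal projection.
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the Lipschitz constant
lam = 0.01                               # sparsity regularization weight
x_hat = np.zeros(n)
for _ in range(500):
    r = x_hat - step * A.T @ (A @ x_hat - y)                      # gradient step
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # prox (shrinkage)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {rel_err:.4f}")
```

In a DUN, each loop iteration becomes one network stage with learned parameters (and, in RHT-Net, learned representation-domain sampling in place of the fixed matrix A), which is what the abstract's "deep unrolling stages" refers to.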
URI: https://bura.brunel.ac.uk/handle/2438/32086
DOI: https://doi.org/10.1109/tcsvt.2025.3614371
ISSN: 1051-8215
Other Identifiers: ORCiD: Heping Song https://orcid.org/0000-0002-8583-2804
ORCiD: Jingyao Gong https://orcid.org/0009-0009-5907-5836
ORCiD: Hongjie Jia https://orcid.org/0000-0002-3354-5184
ORCiD: Xiangjun Shen https://orcid.org/0000-0002-3359-8972
ORCiD: Hongying Meng https://orcid.org/0000-0002-8836-1382
ORCiD: Le Wang https://orcid.org/0000-0001-6636-6396
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works ( https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ ).
Size: 15.38 MB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.