Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/30985
Full metadata record
DC Field	Value	Language
dc.contributor.author	Jiang, F	-
dc.contributor.author	Tang, C	-
dc.contributor.author	Dong, L	-
dc.contributor.author	Wang, K	-
dc.contributor.author	Yang, K	-
dc.contributor.author	Pan, C	-
dc.date.accessioned	2025-03-28T08:15:17Z	-
dc.date.available	2025-03-28T08:15:17Z	-
dc.date.issued	2025-03-04	-
dc.identifier	ORCiD: Feibo Jiang https://orcid.org/0000-0002-0235-0253	-
dc.identifier	ORCiD: Li Dong https://orcid.org/0000-0002-0127-8480	-
dc.identifier	ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800	-
dc.identifier	ORCiD: Kun Yang https://orcid.org/0000-0002-6782-6689	-
dc.identifier	ORCiD: Cunhua Pan https://orcid.org/0000-0001-5286-7958	-
dc.identifier.citation	Jiang, F. et al. (2025) 'Visual Language Model based Cross-modal Semantic Communication Systems', IEEE Transactions on Wireless Communications, 0 (early access), pp. 1 - 13. doi: 10.1109/TWC.2025.3539526.	en_US
dc.identifier.issn	1536-1276	-
dc.identifier.uri	https://bura.brunel.ac.uk/handle/2438/30985	-
dc.description.abstract	Semantic Communication (SC) has emerged as a novel communication paradigm in recent years. Nevertheless, extant Image Semantic Communication (ISC) systems face several challenges in dynamic environments, including low information density, catastrophic forgetting, and uncertain Signal-to-Noise Ratio (SNR). To address these challenges, we propose a novel Vision-Language Model-based Cross-modal Semantic Communication (VLM-CSC) system. The VLM-CSC comprises three novel components: (1) the Cross-modal Knowledge Base (CKB) extracts high-density textual semantics from the semantically sparse image at the transmitter and reconstructs the original image from those textual semantics at the receiver; transmitting high-density semantics helps alleviate bandwidth pressure. (2) The Memory-assisted Encoder and Decoder (MED) employ a hybrid long/short-term memory mechanism, enabling the semantic encoder and decoder to overcome catastrophic forgetting in dynamic environments when the distribution of semantic features drifts. (3) The Noise Attention Module (NAM) uses attention mechanisms to adaptively adjust the semantic coding and the channel coding based on SNR, ensuring the robustness of the CSC system. Experimental simulations validate the effectiveness, adaptability, and robustness of the CSC system.	en_US
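The Noise Attention Module described in the abstract adaptively re-weights coded features according to the channel SNR. As a rough illustration only (the paper's actual NAM architecture is not given in this record), the sketch below gates each feature channel with a sigmoid attention weight produced from a scalar SNR input; all names, shapes, and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_attention(features: np.ndarray, snr_db: float,
                    w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Illustrative SNR-conditioned gating (not the paper's NAM).

    features: (d,) encoded semantic features
    snr_db:   scalar channel SNR in dB
    w1:       (h,) weights embedding the scalar SNR
    w2:       (d, h) weights producing one gate per feature channel
    """
    hidden = np.tanh(w1 * snr_db)                 # (h,) SNR embedding
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # (d,) sigmoid gates in (0, 1)
    return features * gates                        # attenuate/emphasise channels

# Toy usage: the same features are gated differently at low vs high SNR.
d, h = 8, 4
w1 = rng.normal(size=h)
w2 = rng.normal(size=(d, h))
x = rng.normal(size=d)
low = noise_attention(x, snr_db=0.0, w1=w1, w2=w2)
high = noise_attention(x, snr_db=20.0, w1=w1, w2=w2)
```

At 0 dB the SNR embedding is all zeros, so every gate sits at 0.5; as SNR grows, the gates diverge per channel, which is the adaptive behaviour the abstract attributes to the NAM.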
dc.description.sponsorship	10.13039/501100001809 - National Natural Science Foundation of China (Grant Numbers 41904127 and 62132004), in part by the Hunan Provincial Natural Science Foundation of China under Grant 2024JJ5270, in part by the Open Project of Xiangjiang Laboratory under Grant 22XJ03011, in part by the Scientific Research Fund of the Hunan Provincial Education Department under Grant 22B0663, in part by the Changsha Natural Science Foundation under Grants kq2402098 and kq2402162, in part by the Jiangsu Major Project on Basic Researches under Grant BK20243059, and in part by the Gusu Innovation Project under Grant ZXL2024360.	en_US
dc.format.extent	1 - 13	-
dc.format.medium	Print-Electronic	-
dc.language	English	-
dc.language.iso	en_US	en_US
dc.publisher	Institute of Electrical and Electronics Engineers (IEEE)	en_US
dc.rights	Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. See: https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/.	-
dc.rights.uri	https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/	-
dc.subject	semantic communication	en_US
dc.subject	knowledge base	en_US
dc.subject	vision language model	en_US
dc.subject	large language model	en_US
dc.subject	continual learning	en_US
dc.title	Visual Language Model based Cross-modal Semantic Communication Systems	en_US
dc.type	Article	en_US
dc.identifier.doi	https://doi.org/10.1109/TWC.2025.3539526	-
dc.relation.isPartOf	IEEE Transactions on Wireless Communications	-
pubs.issue	00	-
pubs.publication-status	Published	-
pubs.volume	0	-
dc.identifier.eissn	1558-2248	-
dc.rights.holder	Institute of Electrical and Electronics Engineers (IEEE)	-
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File	Description	Size	Format
FullText.pdf	Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE); rights statement as in dc.rights above	4.62 MB	Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.