Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32054
Full metadata record
DC Field | Value | Language
dc.contributor.author | Peng, Y | -
dc.contributor.author | Xiang, L | -
dc.contributor.author | Yang, K | -
dc.contributor.author | Jiang, F | -
dc.contributor.author | Wang, K | -
dc.contributor.author | Wu, DO | -
dc.date.accessioned | 2025-09-26T19:57:11Z | -
dc.date.available | 2025-09-26T19:57:11Z | -
dc.date.issued | 2025-09-16 | -
dc.identifier | ORCiD: Yubo Peng https://orcid.org/0000-0001-9684-2971 | -
dc.identifier | ORCiD: Luping Xiang https://orcid.org/0000-0003-1465-6708 | -
dc.identifier | ORCiD: Kun Yang https://orcid.org/0000-0002-6782-6689 | -
dc.identifier | ORCiD: Feibo Jiang https://orcid.org/0000-0002-0235-0253 | -
dc.identifier | ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800 | -
dc.identifier | ORCiD: Dapeng Oliver Wu https://orcid.org/0000-0003-1755-0183 | -
dc.identifier.citation | Peng, Y. et al. (2025) 'SIMAC: A Semantic-Driven Integrated Multimodal Sensing And Communication Framework', IEEE Journal on Selected Areas in Communications, 0 (early access), pp. 1–16. doi: 10.1109/jsac.2025.3610398. | en_US
dc.identifier.issn | 0733-8716 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32054 | -
dc.description.abstract | Traditional unimodal sensing faces limitations in accuracy and capability, and its decoupled implementation with communication systems increases latency in bandwidth-constrained environments. Additionally, single-task-oriented sensing systems fail to address users' diverse demands. To overcome these challenges, we propose a semantic-driven integrated multimodal sensing and communication (SIMAC) framework. This framework leverages a joint source-channel coding architecture to achieve simultaneous sensing, decoding, and transmission of sensing results. Specifically, SIMAC first introduces a multimodal semantic fusion (MSF) network, which employs two extractors to extract semantic information from radar signals and images, respectively. MSF then applies cross-attention mechanisms to fuse these unimodal features and generate multimodal semantic representations. Secondly, we present a large language model (LLM)-based semantic encoder (LSE), where relevant communication parameters and multimodal semantics are mapped into a unified latent space and input to the LLM, enabling channel-adaptive semantic encoding. Thirdly, a task-oriented sensing semantic decoder (SSD) is proposed, in which different decoding heads are designed according to the specific needs of each task. Simultaneously, a multi-task learning strategy is introduced to train the SIMAC framework, enabling diverse sensing services. Finally, experimental simulations demonstrate that the proposed framework delivers diverse sensing services with higher accuracy. | en_US
dc.description.sponsorship | The paper was partly funded by the Jiangsu Major Project on Fundamental Research (Grant No. BK20243059), the Gusu Innovation Project (Grant No. ZXL2024360), the High-Tech District of Suzhou City (Grant No. RC2025001), the Natural Science Foundation of China (Grant Nos. 62132004 and 62301122), the Major Program Project of Xiangjiang Laboratory (Grant Nos. XJ2023001 and XJ2022001), and the Qiyuan Lab Innovation Fund (Grant No. 2022-JCJQ-LA-001-088). | -
dc.format.extent | 1–16 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Creative Commons Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | integrated multimodal sensing and communications | en_US
dc.subject | semantic communication | en_US
dc.subject | large language model | en_US
dc.subject | multi-task learning | en_US
dc.title | SIMAC: A Semantic-Driven Integrated Multimodal Sensing And Communication Framework | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/jsac.2025.3610398 | -
dc.relation.isPartOf | IEEE Journal on Selected Areas in Communications | -
pubs.issue | 0 | -
pubs.publication-status | Published | -
pubs.volume | 00 | -
dc.identifier.eissn | 1558-0008 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The Author(s) | -
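The cross-attention fusion step that the abstract attributes to the MSF network can be illustrated with a minimal pure-Python sketch of scaled dot-product cross-attention, in which tokens from one modality (e.g. radar semantics) attend over tokens from the other (e.g. image semantics). The token values, dimensions, and function names below are invented for illustration and are not taken from the paper.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token (one modality)
    attends over the key/value tokens (the other modality) and returns a
    weighted mixture of the values."""
    d = len(keys[0])  # key dimension, used for the 1/sqrt(d) scaling
    fused = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Convex combination of the value vectors
        fused.append([sum(w * v[j] for w, v in zip(weights, values))
                      for j in range(len(values[0]))])
    return fused

# Toy example: one radar-semantic token attends over two image-semantic tokens.
radar_tokens = [[1.0, 0.0]]
image_tokens = [[1.0, 0.0], [0.0, 1.0]]
fused = cross_attention(radar_tokens, image_tokens, image_tokens)
```

In a real MSF-style network the queries, keys, and values would be learned linear projections of the extracted radar and image features, typically with multiple heads; this sketch only shows the attention arithmetic itself.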
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | "For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) license to any Accepted Manuscript version arising." | 7.56 MB | Adobe PDF


This item is licensed under a Creative Commons License.