Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/32054
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Peng, Y | - |
dc.contributor.author | Xiang, L | - |
dc.contributor.author | Yang, K | - |
dc.contributor.author | Jiang, F | - |
dc.contributor.author | Wang, K | - |
dc.contributor.author | Wu, DO | - |
dc.date.accessioned | 2025-09-26T19:57:11Z | - |
dc.date.available | 2025-09-26T19:57:11Z | - |
dc.date.issued | 2025-09-16 | - |
dc.identifier | ORCiD: Yubo Peng https://orcid.org/0000-0001-9684-2971 | - |
dc.identifier | ORCiD: Luping Xiang https://orcid.org/0000-0003-1465-6708 | - |
dc.identifier | ORCiD: Kun Yang https://orcid.org/0000-0002-6782-6689 | - |
dc.identifier | ORCiD: Feibo Jiang https://orcid.org/0000-0002-0235-0253 | - |
dc.identifier | ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800 | - |
dc.identifier | ORCiD: Dapeng Oliver Wu https://orcid.org/0000-0003-1755-0183 | - |
dc.identifier.citation | Peng, Y. et al. (2025) 'SIMAC: A Semantic-Driven Integrated Multimodal Sensing And Communication Framework', IEEE Journal on Selected Areas in Communications, 0 (early access), pp. 1-16. doi: 10.1109/jsac.2025.3610398. | en_US |
dc.identifier.issn | 0733-8716 | - |
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32054 | - |
dc.description.abstract | Traditional unimodal sensing faces limitations in accuracy and capability, and its decoupled implementation with communication systems increases latency in bandwidth-constrained environments. Additionally, single-task-oriented sensing systems fail to address users’ diverse demands. To overcome these challenges, we propose a semantic-driven integrated multimodal sensing and communication (SIMAC) framework. This framework leverages a joint source-channel coding architecture to achieve simultaneous sensing, decoding, and transmission of sensing results. Specifically, SIMAC first introduces a multimodal semantic fusion (MSF) network, which employs two extractors to extract semantic information from radar signals and images, respectively. MSF then applies cross-attention mechanisms to fuse these unimodal features and generate multimodal semantic representations. Second, we present a large language model (LLM)-based semantic encoder (LSE), in which relevant communication parameters and multimodal semantics are mapped into a unified latent space and input to the LLM, enabling channel-adaptive semantic encoding. Third, a task-oriented sensing semantic decoder (SSD) is proposed, in which different decoding heads are designed according to the specific needs of each task. In parallel, a multi-task learning strategy is introduced to train the SIMAC framework, enabling diverse sensing services. Finally, simulation experiments demonstrate that the proposed framework delivers diverse, higher-accuracy sensing services. (An illustrative sketch of the cross-attention fusion step appears below this metadata table.) | en_US |
dc.description.sponsorship | The paper was partly funded by the Jiangsu Major Project on Fundamental Research (Grant No. BK20243059), the Gusu Innovation Project (Grant No. ZXL2024360), the High-Tech District of Suzhou City (Grant No. RC2025001), the Natural Science Foundation of China (Grant Nos. 62132004 and 62301122), the Major Program Project of Xiangjiang Laboratory (Grant Nos. XJ2023001 and XJ2022001), and the Qiyuan Lab Innovation Fund (Grant No. 2022-JCJQ-LA-001-088). | - |
dc.format.extent | 1 - 16 | - |
dc.format.medium | Print-Electronic | - |
dc.language.iso | en_US | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.rights | Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works ( https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ ). | - |
dc.rights.uri | https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | - |
dc.subject | integrated multimodal sensing and communications | en_US |
dc.subject | semantic communication | en_US |
dc.subject | large language model | en_US |
dc.subject | multi-task learning | en_US |
dc.title | SIMAC: A Semantic-Driven Integrated Multimodal Sensing And Communication Framework | en_US |
dc.type | Article | en_US |
dc.identifier.doi | https://doi.org/10.1109/jsac.2025.3610398 | - |
dc.relation.isPartOf | IEEE Journal on Selected Areas in Communications | - |
pubs.issue | 0 | - |
pubs.publication-status | Published | - |
pubs.volume | 00 | - |
dc.identifier.eissn | 1558-0008 | - |
dc.rights.holder | Institute of Electrical and Electronics Engineers (IEEE) | - |
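As a rough illustration of the multimodal semantic fusion (MSF) step described in the abstract (cross-attention applied to radar and image semantics), the following minimal PyTorch sketch shows one way such a fusion block could be wired. It is not the authors' implementation: the module name CrossAttentionFusion, the feature dimensions, and the pooling and projection choices are illustrative assumptions layered on a standard bidirectional cross-attention pattern.

```python
# Minimal sketch (not the paper's released code) of cross-attention fusion of
# radar and image semantics into one multimodal representation. All names and
# dimensions here are illustrative assumptions.

import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse radar and image semantic tokens with bidirectional cross-attention."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # Image tokens attend to radar tokens, and vice versa.
        self.img_to_radar = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.radar_to_img = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)  # merge the two attended streams

    def forward(self, radar_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # radar_feats: (batch, N_r, d_model); image_feats: (batch, N_i, d_model)
        img_ctx, _ = self.img_to_radar(query=image_feats, key=radar_feats, value=radar_feats)
        radar_ctx, _ = self.radar_to_img(query=radar_feats, key=image_feats, value=image_feats)
        # Pool each attended stream and project to a single multimodal semantic vector.
        fused = torch.cat([img_ctx.mean(dim=1), radar_ctx.mean(dim=1)], dim=-1)
        return self.proj(fused)  # (batch, d_model)


if __name__ == "__main__":
    fusion = CrossAttentionFusion()
    radar = torch.randn(2, 32, 256)    # toy radar semantic tokens
    image = torch.randn(2, 49, 256)    # toy image semantic tokens
    print(fusion(radar, image).shape)  # torch.Size([2, 256])
```

In this sketch each modality queries the other, and the two attended streams are mean-pooled and projected into one multimodal semantic vector that a downstream channel-adaptive semantic encoder, such as the LLM-based encoder the abstract describes, could consume.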
Appears in Collections: | Dept of Computer Science Research Papers |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
FullText.pdf | Copyright © 2025 Institute of Electrical and Electronics Engineers (IEEE); personal use permitted, all other uses require IEEE permission (see rights statement above). | 7.2 MB | Adobe PDF |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.