Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27173
Title: Large AI Model Empowered Multimodal Semantic Communications
Authors: Jiang, F
Peng, Y
Dong, L
Wang, K
Yang, K
Pan, C
You, X
Keywords: semantic communication;large AI models;LLM;MLM;knowledge base
Issue Date: 3-Sep-2023
Publisher: Cornell University
Citation: Jiang, F. et al. (2023) 'Large AI Model Empowered Multimodal Semantic Communications', arXiv:2309.01249v1 [cs.AI], pp. 1 - 8. doi: 10.48550/arXiv.2309.01249.
Abstract: Multimodal signals, including text, audio, image, and video, can be integrated into Semantic Communication (SC) to provide an immersive experience with low latency and high quality at the semantic level. However, multimodal SC faces several challenges, including data heterogeneity, semantic ambiguity, and signal fading. Recent advancements in large AI models, particularly Multimodal Language Models (MLMs) and Large Language Models (LLMs), offer potential solutions to these issues. To this end, we propose a Large AI Model-based Multimodal SC (LAM-MSC) framework, in which we first present MLM-based Multimodal Alignment (MMA), which utilizes the MLM to enable transformation between multimodal and unimodal data while preserving semantic consistency. Then, a personalized LLM-based Knowledge Base (LKB) is proposed, which allows users to perform personalized semantic extraction or recovery through the LLM, effectively addressing semantic ambiguity. Next, we apply Conditional Generative Adversarial Network-based channel Estimation (CGE) to obtain Channel State Information (CSI), which effectively mitigates the impact of fading channels in SC. Finally, we conduct simulations that demonstrate the superior performance of the LAM-MSC framework.
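The abstract outlines a three-stage transmit/receive pipeline (MMA for modality alignment, LKB for personalized semantic extraction and recovery, CGE for channel estimation). The minimal Python sketch below illustrates only that data flow under stated assumptions; every class and method name (MMA.to_text, LKB.extract, CGE.estimate_csi, etc.) is a hypothetical placeholder, not the authors' implementation, which would back these stages with an actual MLM, LLM, and CGAN-based estimator.

    # Hypothetical sketch of the LAM-MSC data flow described in the abstract.
    # All names are illustrative placeholders, not the authors' code.
    from dataclasses import dataclass


    @dataclass
    class Packet:
        """Unimodal (text) semantic payload carried over the channel."""
        semantics: str
        modality: str  # original modality: "text", "audio", "image", or "video"


    class MMA:
        """MLM-based Multimodal Alignment: map any modality to/from text."""

        def to_text(self, signal: str, modality: str) -> Packet:
            # Placeholder: a real MLM would caption/transcribe the input signal.
            return Packet(semantics=f"[{modality}] {signal}", modality=modality)

        def to_modality(self, packet: Packet) -> str:
            # Placeholder: a generative model would reconstruct the original modality.
            return packet.semantics


    class LKB:
        """Personalized LLM-based Knowledge Base for semantic extraction/recovery."""

        def __init__(self, user_profile: dict):
            self.user_profile = user_profile

        def extract(self, packet: Packet) -> Packet:
            # Placeholder: an LLM would compress semantics using the user context,
            # reducing ambiguity before transmission.
            return packet

        def recover(self, packet: Packet) -> Packet:
            # Placeholder: an LLM would expand semantics back using the user context.
            return packet


    class CGE:
        """CGAN-based Channel Estimation: supply CSI to equalize the fading channel."""

        def estimate_csi(self) -> float:
            # Placeholder gain; a real CGAN would infer CSI from pilot signals.
            return 1.0

        def equalize(self, received: Packet, csi: float) -> Packet:
            # Identity here; real code would compensate fading using the CSI.
            return received


    def lam_msc_roundtrip(signal: str, modality: str, user_profile: dict) -> str:
        mma, lkb, cge = MMA(), LKB(user_profile), CGE()
        # Transmitter: align to text, then perform personalized semantic extraction.
        tx = lkb.extract(mma.to_text(signal, modality))
        # Channel: estimate CSI and equalize the received packet.
        rx = cge.equalize(tx, cge.estimate_csi())
        # Receiver: personalized semantic recovery, then map back to the modality.
        return mma.to_modality(lkb.recover(rx))


    if __name__ == "__main__":
        print(lam_msc_roundtrip("a dog running on the beach", "image", {"user": "A"}))

Running the example performs a lossless round trip and prints the aligned text payload; in the framework itself, each placeholder stage would be replaced by the corresponding large-model component.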
Description: The file on this repository is an arXiv preprint. It has not been certified by peer review. It may be submitted to a journal for publication and replaced by the authors' accepted manuscript in due course.
URI: https://bura.brunel.ac.uk/handle/2438/27173
DOI: https://doi.org/10.48550/arXiv.2309.01249
Other Identifiers: ORCID iD: Kezhi Wang https://orcid.org/0000-0001-8602-0800
arXiv:2309.01249v1 [cs.AI]
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File: Preprint.pdf
Description: arXiv preprint (see Description above)
Size: 4.09 MB
Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.