Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/31732
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Dong, L | - |
dc.contributor.author | Jiang, F | - |
dc.contributor.author | Peng, Y | - |
dc.contributor.author | Wang, K | - |
dc.contributor.author | Yang, K | - |
dc.contributor.author | Pan, C | - |
dc.contributor.author | Schober, R | - |
dc.date.accessioned | 2025-08-12T15:54:45Z | - |
dc.date.available | 2025-08-12T15:54:45Z | - |
dc.date.issued | 2024-10-22 | - |
dc.identifier | ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800 | - |
dc.identifier.citation | Dong, L. et al. (2025) 'LAMBO: Large AI Model Empowered Edge Intelligence', IEEE Communications Magazine, 63 (4), pp. 88 - 94. doi: 10.1109/MCOM.001.2400076. | en_US |
dc.identifier.issn | 0163-6804 | - |
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/31732 | - |
dc.description | The article archived on this institutional repository is a preprint version, available at arXiv:2308.15078v2 [cs.AI] (https://arxiv.org/abs/2308.15078). It has not been certified by peer review. Please consult the published version at https://doi.org/10.1109/MCOM.001.2400076. | en_US |
dc.description.abstract | Next-generation edge intelligence is anticipated to benefit various applications via offloading techniques. However, traditional offloading architectures face several issues, including heterogeneous constraints, partial perception, uncertain generalization, and lack of tractability. In this article, we propose a large AI model-based offloading (LAMBO) framework with over one billion parameters for solving these problems. We first use input embedding (IE) to achieve normalized feature representation with heterogeneous constraints and task prompts. Then, we introduce a novel asymmetric encoder-decoder (AED) as the decision-making model, which is an improved transformer architecture consisting of a deep encoder and a shallow decoder for global perception and decision. Next, actor-critic learning (ACL) is used to pre-train the AED for different optimization tasks under corresponding prompts, enhancing the AED's generalization in multi-task scenarios. Finally, we propose an active learning from expert feedback (ALEF) method to fine-tune the decoder of the AED for tracking changes in dynamic environments. Our simulation results validate the advantages of the proposed LAMBO framework. | en_US |
dc.description.sponsorship | This work was supported in part by the National Natural Science Foundation of China under Grants 41904127 and 62132004, in part by the Hunan Provincial Natural Science Foundation of China under Grant 2024JJ5270, in part by the Open Project of Xiangjiang Laboratory under Grant 22XJ03011, in part by the Scientific Research Fund of Hunan Provincial Education Department under Grant 22B0663, and in part by the Changsha Natural Science Foundation under Grants kq2402098 and kq2402162. | en_US |
dc.format.extent | 88 - 94 | - |
dc.language | English | - |
dc.language.iso | en_US | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.rights | Copyright © 2024 The Author(s). The URI http://arxiv.org/licenses/nonexclusive-distrib/1.0/ is used to record the fact that the submitter granted the following license to arXiv.org on submission of an article: I grant arXiv.org a perpetual, non-exclusive license to distribute this article. I certify that I have the right to grant this license. I understand that submissions cannot be completely removed once accepted. I understand that arXiv.org reserves the right to reclassify or reject any submission. | - |
dc.rights.uri | https://arxiv.org/licenses/nonexclusive-distrib/1.0/ | - |
dc.subject | large AI model | en_US |
dc.subject | edge intelligence | en_US |
dc.subject | encoder-decoder architecture | en_US |
dc.subject | reinforcement learning | en_US |
dc.subject | active learning | en_US |
dc.title | LAMBO: Large AI Model Empowered Edge Intelligence | en_US |
dc.type | Article | en_US |
dc.identifier.doi | https://doi.org/10.1109/MCOM.001.2400076 | - |
dc.relation.isPartOf | IEEE Communications Magazine | - |
pubs.issue | 4 | - |
pubs.publication-status | Published | - |
pubs.volume | 63 | - |
dc.identifier.eissn | 1558-1896 | - |
dc.rights.holder | The Author(s) | - |
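As a purely illustrative aid to the abstract above, the sketch below outlines one possible shape of the asymmetric encoder-decoder (AED) the paper describes: a deep transformer encoder for global perception paired with a shallow decoder for decision generation. It is a minimal PyTorch sketch under assumed settings; the class name, layer counts, and dimensions are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of a "deep encoder, shallow decoder" transformer in the spirit
# of the AED described in the abstract. Layer counts, dimensions, and names are
# assumptions for illustration only, not the authors' configuration.
import torch
import torch.nn as nn

class AsymmetricEncoderDecoder(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_encoder_layers=8, num_decoder_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        # Deep encoder: global perception over embedded constraints and task prompts.
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_encoder_layers)
        # Shallow decoder: lightweight decision generation from the encoded memory.
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=num_decoder_layers)

    def forward(self, src, tgt):
        # src: (batch, src_len, d_model) embedded features; tgt: (batch, tgt_len, d_model) decision queries.
        memory = self.encoder(src)
        return self.decoder(tgt, memory)

if __name__ == "__main__":
    model = AsymmetricEncoderDecoder()
    src = torch.randn(4, 16, 256)   # hypothetical embedded constraints + task prompt tokens
    tgt = torch.randn(4, 8, 256)    # hypothetical per-task decision query tokens
    out = model(src, tgt)
    print(out.shape)                # torch.Size([4, 8, 256])
```

The deep/shallow split reflects the asymmetry named in the abstract; how the paper embeds inputs, pre-trains with actor-critic learning, or fine-tunes the decoder with expert feedback is not shown here.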
Appears in Collections: | Dept of Computer Science Research Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FullText.pdf | Copyright © 2024 The Author(s). The URI http://arxiv.org/licenses/nonexclusive-distrib/1.0/ is used to record the fact that the submitter granted the following license to arXiv.org on submission of an article: I grant arXiv.org a perpetual, non-exclusive license to distribute this article. I certify that I have the right to grant this license. I understand that submissions cannot be completely removed once accepted. I understand that arXiv.org reserves the right to reclassify or reject any submission. | 2.02 MB | Adobe PDF | View/Open
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.