Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/32972
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Q | -
dc.contributor.author | Xie, Y | -
dc.contributor.author | Fan, P | -
dc.contributor.author | Qin, D | -
dc.contributor.author | Wang, K | -
dc.contributor.author | Cheng, N | -
dc.contributor.author | Letaief, KB | -
dc.date.accessioned | 2026-03-12T16:34:10Z | -
dc.date.available | 2026-03-12T16:34:10Z | -
dc.date.issued | 2026-02-16 | -
dc.identifier | ORCiD: Qiong Wu https://orcid.org/0000-0002-4899-1718 | -
dc.identifier | ORCiD: Pingyi Fan https://orcid.org/0000-0002-0658-6079 | -
dc.identifier | ORCiD: Dong Qin https://orcid.org/0000-0002-9210-9067 | -
dc.identifier | ORCiD: Kezhi Wang https://orcid.org/0000-0001-8602-0800 | -
dc.identifier | ORCiD: Nan Cheng https://orcid.org/0000-0001-7907-2071 | -
dc.identifier.citation | Wu, Q. et al. (2026) 'Large Language Model-Based Task Offloading and Resource Allocation for Digital Twin Edge Computing Networks', IEEE Transactions on Mobile Computing, 0 (early access), pp. 1–12. doi: 10.1109/tmc.2026.3664866. | en-US
dc.identifier.issn | 1536-1233 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/32972 | -
dc.description.abstract | In this paper, we propose a general digital twin edge computing network comprising multiple vehicles and a server. Each vehicle generates multiple computing tasks within a time slot, leading to queuing challenges when offloading tasks to the server. The study investigates task offloading strategies, queue stability, and resource allocation. Lyapunov optimization is employed to transform long-term constraints into tractable short-term decisions. To solve the resulting problem, an in-context learning approach based on a large language model (LLM) is adopted, replacing the conventional multi-agent reinforcement learning (MARL) framework. Experimental results demonstrate that the LLM-based method achieves performance comparable or even superior to MARL. | en-US
dc.description.sponsorship | This work was supported in part by the Jiangxi Province Science and Technology Development Programme under Grant No. 20242BCC32016, in part by the National Natural Science Foundation of China under Grant No. 61701197, in part by the National Key Research and Development Program of China under Grant No. 2021YFA1000500(4), in part by the Research Grants Council under the Areas of Excellence Scheme under Grant AoE/E601/22R, and in part by the 111 Project under Grant No. B23008. | en-US
dc.format.extent | 1–12 | -
dc.format.medium | Print-Electronic | -
dc.language | en-US | -
dc.language.iso | en | en-US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en-US
dc.rights | Creative Commons Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | large language model | en-US
dc.subject | digital twin | en-US
dc.subject | resource allocation | en-US
dc.subject | edge computing | en-US
dc.title | Large Language Model-Based Task Offloading and Resource Allocation for Digital Twin Edge Computing Networks | en-US
dc.type | Article | en-US
dc.identifier.doi | https://doi.org/10.1109/tmc.2026.3664866 | -
dc.relation.isPartOf | IEEE Transactions on Mobile Computing | -
pubs.issue | 0 | -
pubs.publication-status | Published | -
pubs.volume | 00 | -
dc.identifier.eissn | 1558-0660 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dc.rights.holder | The Author(s) | -
dc.contributor.orcid | Wu, Qiong [0000-0002-4899-1718] | -
dc.contributor.orcid | Fan, Pingyi [0000-0002-0658-6079] | -
dc.contributor.orcid | Qin, Dong [0000-0002-9210-9067] | -
dc.contributor.orcid | Wang, Kezhi [0000-0001-8602-0800] | -
dc.contributor.orcid | Cheng, Nan [0000-0001-7907-2071] | -
Appears in Collections: Department of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. | 5.64 MB | Adobe PDF


This item is licensed under a Creative Commons License.