Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/29713
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sohaib, RM | -
dc.contributor.author | Onireti, O | -
dc.contributor.author | Tan, K | -
dc.contributor.author | Sambo, Y | -
dc.contributor.author | Swash, R | -
dc.contributor.author | Imran, M | -
dc.date.accessioned | 2024-09-11T19:26:36Z | -
dc.date.available | 2024-09-11T19:26:36Z | -
dc.date.issued | 2024-07-22 | -
dc.identifier | ORCiD: Rafiq Swash https://orcid.org/0000-0003-4242-7478 | -
dc.identifier.citation | Sohaib, R.M. et al. (2024) 'Meta-Transfer Learning-Based Handover Optimization for V2N Communication', IEEE Transactions on Vehicular Technology, 0 (early access), pp. 1 - 15. doi: 10.1109/TVT.2024.3431875. | en_US
dc.identifier.issn | 0018-9545 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/29713 | -
dc.description.abstract | The rapid growth of vehicle-to-network (V2N) communication demands efficient handover decision-making strategies to ensure seamless connectivity and maximum throughput. However, the dynamic nature of V2N scenarios poses challenges for traditional handover algorithms. To address this, we propose a deep reinforcement learning (DRL)-based approach for optimizing handover decisions in dynamic V2N communication. We leverage the advantages of transfer learning and meta-learning to generalize across time-evolving source and target tasks. In this paper, we derive generalization bounds for our DRL-based approach, specifically focusing on optimizing the handover process in V2N communication. The derived bounds provide theoretical guarantees on the expected generalization error of the learned handover time function for the target task. To implement our framework, we propose a meta-learning framework, Adapt-to-evolve (A2E), based on double deep Q-networks (DDQN) with a Thompson sampling approach. The A2E framework enables quick adaptation to new tasks by minimizing the error upper bounds with divergence measures. Through transfer learning, the meta-learner dynamically evolves its handover decision-making strategy to maximize average throughput while reducing the number of handovers. We use Thompson sampling with the DDQN to balance exploration and exploitation. The DDQN with Thompson sampling ensures efficient and effective learning and forms the foundation for optimizing the meta-training process, resulting in an improvement in cumulative packet loss of 48.02% in highway settings and 46.32% in rural settings. | en_US
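The abstract describes combining double Q-learning with Thompson sampling to balance exploration and exploitation in handover decisions. The sketch below is a minimal tabular analogue of that idea, not the paper's DDQN implementation: the toy MDP (states as road-position bins, a stay/handover action pair, the reward shape, and the handover penalty) is entirely hypothetical, and the Gaussian-posterior Thompson sampling over Q-values is one common simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy V2N handover MDP, for illustration only:
# states = vehicle position bins along a road, actions = {0: stay, 1: handover}.
N_STATES, N_ACTIONS = 10, 2
HO_PENALTY = 0.5  # assumed cost of performing a handover

def step(state, action):
    # Reward is a crude stand-in for throughput: signal quality of the
    # serving cell, minus a penalty when a handover is executed.
    quality = 1.0 - abs(state - N_STATES // 2) / N_STATES
    reward = quality - (HO_PENALTY if action == 1 else 0.0)
    next_state = (state + 1) % N_STATES  # vehicle moves forward one bin
    return next_state, reward

# Double Q-learning (tabular stand-in for DDQN) with Thompson-style
# action selection: sample Q-values from a Gaussian whose spread shrinks
# with visit counts, then act greedily on the sample.
QA = np.zeros((N_STATES, N_ACTIONS))
QB = np.zeros((N_STATES, N_ACTIONS))
counts = np.ones((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.9

state = 0
for t in range(5000):
    q_mean = (QA + QB) / 2.0
    sampled = rng.normal(q_mean[state], 1.0 / np.sqrt(counts[state]))
    action = int(np.argmax(sampled))  # Thompson sampling step
    next_state, reward = step(state, action)
    counts[state, action] += 1
    if rng.random() < 0.5:
        # Double-Q update: one table selects the next action, ...
        best = int(np.argmax(QA[next_state]))
        # ... the other evaluates it, reducing overestimation bias.
        target = reward + gamma * QB[next_state, best]
        QA[state, action] += alpha * (target - QA[state, action])
    else:
        best = int(np.argmax(QB[next_state]))
        target = reward + gamma * QA[next_state, best]
        QB[state, action] += alpha * (target - QB[state, action])
    state = next_state

# Greedy handover policy learned for each position bin.
policy = np.argmax((QA + QB) / 2.0, axis=1)
print(policy)
```

The count-based variance makes rarely tried state-action pairs draw noisier Q samples, so they are occasionally selected (exploration) while well-visited pairs are chosen by their mean value (exploitation) — the same trade-off the abstract attributes to Thompson sampling in the DDQN.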
dc.format.extent | 1 - 15 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Copyright © 2024 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. See: https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | -
dc.rights.uri | https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/post-publication-policies/ | -
dc.subject | V2N | en_US
dc.subject | DRL | en_US
dc.subject | HO | en_US
dc.subject | generalization | en_US
dc.subject | meta-learning | en_US
dc.title | Meta-Transfer Learning-Based Handover Optimization for V2N Communication | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/TVT.2024.3431875 | -
dc.relation.isPartOf | IEEE Transactions on Vehicular Technology | -
pubs.publication-status | Published | -
dc.identifier.eissn | 1939-9359 | -
dc.rights.holder | Institute of Electrical and Electronics Engineers (IEEE) | -
Appears in Collections:Brunel Design School Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | - | 5.83 MB | Adobe PDF
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.