Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/31830
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zakariya Saleh, Z | -
dc.contributor.author | Abbod, MF | -
dc.contributor.author | Nilavalan, R | -
dc.date.accessioned | 2025-08-26T08:38:37Z | -
dc.date.available | 2025-08-26T08:38:37Z | -
dc.date.issued | 2025-03-12 | -
dc.identifier | ORCiD: Zahraa Zakariya Saleh https://orcid.org/0009-0009-2989-112X | -
dc.identifier | ORCiD: Maysam F. Abbod https://orcid.org/0000-0002-8515-7933 | -
dc.identifier | ORCiD: Rajagopal Nilavalan https://orcid.org/0000-0001-8168-2039 | -
dc.identifier.citation | Zakariya Saleh, Z., Abbod, M.F. and Nilavalan, R. (2025) 'Intelligent Resource Allocation via Hybrid Reinforcement Learning in 5G Network Slicing', IEEE Access, 13, pp. 47440 - 47458. doi: 10.1109/ACCESS.2025.3550518. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/31830 | -
dc.description.abstract | Manufacturers are focusing on reconfigurable, resilient environments for Industry 5.0 paradigms. Applications such as digital twins and mobile robots require communication networks that meet strict latency, bandwidth, and reliability requirements. Beyond 5G (B5G) networks provide unprecedented communications performance and flexibility through virtualization and network slicing, which creates multiple logical partitions tailored to particular applications with specific requirements. RAN slicing is a critical component of 5G network slicing, yet it is vulnerable to errors that undermine its ability to meet stringent reliability requirements. This paper presents a novel framework for optimizing resource allocation in 5G network slicing by integrating a Double Deep Q-Network with Prioritized Experience Replay (DDQN-PER) and a Pointer Network-based Long Short-Term Memory (PtrNet-LSTM). The proposed framework dynamically adjusts the attention coefficient that balances Service Satisfaction Level (SSL) and Quality of Experience (QoE), improving system efficiency, spectrum efficiency, and user connectivity across diverse user scenarios. Experiments show that the combined PtrNet-LSTM framework within DDQN-PER outperforms the baseline methods in terms of spectrum efficiency and user connectivity, demonstrating scalability and the potential to address challenges in dynamic wireless networks. | en_US
dc.description.sponsorship | 10.13039/501100006540-Department of Electronic and Electrical Engineering, Brunel University London, U.K. | en_US
dc.format.extent | 47440 - 47458 | -
dc.format.medium | Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Creative Commons Attribution 4.0 International | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | RAN | en_US
dc.subject | RL | en_US
dc.subject | pointer network | en_US
dc.subject | LSTM | en_US
dc.subject | DDQN-PER | en_US
dc.title | Intelligent Resource Allocation via Hybrid Reinforcement Learning in 5G Network Slicing | en_US
dc.type | Article | en_US
dc.date.dateAccepted | 2025-03-05 | -
dc.identifier.doi | https://doi.org/10.1109/ACCESS.2025.3550518 | -
dc.relation.isPartOf | IEEE Access | -
pubs.publication-status | Published | -
pubs.volume | 13 | -
dc.identifier.eissn | 2169-3536 | -
dc.rights.license | https://creativecommons.org/licenses/by/4.0/legalcode.en | -
dcterms.dateAccepted | 2025-03-05 | -
dc.rights.holder | The Authors | -
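
For readers unfamiliar with the techniques named in the abstract above, the following is a minimal, illustrative Python sketch of generic DDQN-PER machinery: a proportional prioritized experience replay buffer plus the Double-DQN target rule. It is not the authors' implementation; the PtrNet-LSTM component, the SSL/QoE attention coefficient, and all slice-level state, action, and reward definitions are omitted, and every name and hyperparameter below is a hypothetical placeholder.

import numpy as np

class PrioritizedReplayBuffer:
    # Proportional PER: transition i is sampled with probability p_i**alpha / sum_j p_j**alpha,
    # and updates are corrected with importance-sampling weights.
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data = []
        self.priorities = np.zeros(capacity)
        self.pos = 0

    def add(self, transition):
        # New transitions receive the current maximum priority so they are replayed at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        scaled = self.priorities[:len(self.data)] ** self.alpha
        probs = scaled / scaled.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        self.priorities[idx] = np.abs(td_errors) + self.eps

def double_dqn_target(q_online, q_target, next_states, rewards, dones, gamma=0.99):
    # Double DQN: the online network selects the greedy action and the target
    # network evaluates it, which reduces Q-value overestimation bias.
    best_actions = np.argmax(q_online(next_states), axis=1)
    next_q = q_target(next_states)[np.arange(len(next_states)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_q

# Toy exercise of the two pieces, with random stand-ins for the real Q-networks
# and slice-level transitions (state, action, reward, next_state, done):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    buf = PrioritizedReplayBuffer(capacity=1000)
    for _ in range(64):
        buf.add((rng.normal(size=4), int(rng.integers(3)), float(rng.normal()),
                 rng.normal(size=4), 0.0))
    idx, batch, weights = buf.sample(batch_size=16)
    fake_q = lambda states: rng.normal(size=(len(states), 3))  # placeholder Q-network
    next_states = np.stack([t[3] for t in batch])
    rewards = np.array([t[2] for t in batch])
    dones = np.array([t[4] for t in batch])
    targets = double_dqn_target(fake_q, fake_q, next_states, rewards, dones)
    buf.update_priorities(idx, td_errors=targets - fake_q(next_states).max(axis=1))

In the paper's setting, the transitions would encode per-slice resource allocations and the reward would reflect the SSL/QoE trade-off described in the abstract; those details are not reproduced here.
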
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | 2.61 MB | Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 International License.