Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27685
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gaballa, M | -
dc.contributor.author | Abbod, M | -
dc.date.accessioned | 2023-11-20T21:58:44Z | -
dc.date.available | 2023-11-20T21:58:44Z | -
dc.date.issued | 2023-11-06 | -
dc.identifier | ORCID iD: Mohamed Gaballa https://orcid.org/0000-0001-9500-7333 | -
dc.identifier | ORCID iD: Maysam Abbod https://orcid.org/0000-0002-8515-7933 | -
dc.identifier | 9010 | -
dc.identifier.citation | Gaballa, M. and Abbod, M. (2023) 'Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System', Sensors, 23 (21), 9010, pp. 1 - 21. doi: 10.3390/s23219010. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27685 | -
dc.description | Data Availability Statement: Not applicable. | en_US
dc.description.abstract | Copyright © 2023 by the authors. In this work, the impact of implementing Deep Reinforcement Learning (DRL) to predict the channel parameters of user devices in a Power-Domain Non-Orthogonal Multiple Access (PD-NOMA) system is investigated. For the channel prediction process, a DRL scheme based on the deep Q-network (DQN) algorithm is developed and incorporated into the NOMA system, so that the DQN model can be employed to estimate the channel coefficients of each user device. The DQN scheme is structured as a simplified approach that efficiently predicts the channel parameters of each user in order to maximize the downlink sum rate of all users in the system. To approximate the channel parameters of each user device, the proposed DQN approach is first initialized with random channel statistics and is then updated dynamically through interaction with the environment. The predicted channel parameters are utilized at the receiver side to recover the desired data. Furthermore, this work examines how the channel estimation process based on the simplified DQN algorithm and the power allocation policy can be integrated for multiuser detection in the examined NOMA system. Simulation results, based on several performance metrics, demonstrate that the proposed simplified DQN algorithm is competitive for channel parameter estimation when compared with benchmark channel estimation schemes such as a deep neural network (DNN) based on long short-term memory (LSTM), RL based on the Q algorithm, and a channel estimation scheme based on the minimum mean square error (MMSE) procedure. | en_US
dc.description.sponsorship | This research received no external funding. | en_US
dc.format.extent | 1 - 21 | -
dc.format.medium | Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | MDPI | en_US
dc.rights | Copyright © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | -
dc.subject | DRL | en_US
dc.subject | DQN | en_US
dc.subject | Q-learning | en_US
dc.subject | LSTM | en_US
dc.subject | NOMA | en_US
dc.title | Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.3390/s23219010 | -
dc.relation.isPartOf | Sensors | -
pubs.issue | 21 | -
pubs.publication-status | Published online | -
pubs.volume | 23 | -
dc.identifier.eissn | 1424-8220 | -
dc.rights.holder | The authors | -
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | 3.79 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.
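The abstract describes the DQN-based channel prediction only at a high level. As a rough, hypothetical illustration of the idea (not the authors' implementation, whose details are in the linked article), the loop can be sketched as a simplified single-state Q-learning analogue: the actions are discretized candidate channel-gain estimates, and the reward is the two-user PD-NOMA downlink sum rate computed with the estimate, with an assumed gain-mismatch penalty standing in for the decoding feedback a real receiver would provide. The two-user setup, power split, true gain, and penalty weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-user PD-NOMA downlink: fixed power split, unit noise power.
P, p_near, p_far = 1.0, 0.2, 0.8      # total power and per-user power fractions
true_gain = 0.7                       # "environment": the actual channel gain
actions = np.linspace(0.1, 1.0, 10)   # discretized candidate gain estimates
Q = np.zeros(len(actions))            # single-state Q table (a stand-in for the DQN)

def sum_rate(gain_est):
    # Reward: downlink sum rate computed with the estimated gain. The far
    # user is decoded first, treating the near user's signal as interference;
    # the near user then applies successive interference cancellation (SIC).
    g = gain_est
    r_far = np.log2(1 + (p_far * P * g) / (p_near * P * g + 1.0))
    r_near = np.log2(1 + p_near * P * g)
    # Assumed mismatch penalty: drives the agent toward the true coefficient,
    # mimicking the feedback that decoding quality would provide.
    return r_far + r_near - 10.0 * (g - true_gain) ** 2

alpha, eps = 0.1, 0.3
for step in range(5000):
    # Epsilon-greedy action selection over the candidate gain estimates.
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
    reward = sum_rate(actions[a])
    Q[a] += alpha * (reward - Q[a])   # stateless update: no next-state bootstrap

best = actions[int(np.argmax(Q))]
print(f"estimated gain: {best:.2f} (true gain: {true_gain})")
```

A full DQN replaces the table with a neural network trained on replayed transitions, which is what lets the paper's scheme scale beyond a small discrete grid of candidate coefficients.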