Please use this identifier to cite or link to this item:
Title: Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System
Authors: Gaballa, M
Abbod, M
Keywords: DRL;DQN;Q-learning;LSTM;NOMA
Issue Date: 6-Nov-2023
Publisher: MDPI
Citation: Gaballa, M. and Abbod, M. (2023) 'Simplified Deep Reinforcement Learning Approach for Channel Prediction in Power Domain NOMA System', Sensors, 23 (21), 9010, pp. 1 - 21. doi: 10.3390/s23219010.
Abstract: Copyright © 2023 by the authors. In this work, the impact of implementing Deep Reinforcement Learning (DRL) to predict the channel parameters of user devices in a Power Domain Non-Orthogonal Multiple Access (PD-NOMA) system is investigated. For the channel prediction process, a DRL model based on the deep Q network (DQN) algorithm is developed and incorporated into the NOMA system, so that the DQN model can estimate the channel coefficients of each user device. The DQN scheme is structured as a simplified approach that efficiently predicts the channel parameters of each user in order to maximize the downlink sum rate of all users in the system. To approximate the channel parameters of each user device, the proposed DQN is first initialized with random channel statistics and then updated dynamically through interaction with the environment. The predicted channel parameters are utilized at the receiver side to recover the desired data. Furthermore, this work examines how channel estimation based on the simplified DQN algorithm and the power allocation policy can be integrated for multiuser detection in the examined NOMA system. Simulation results, based on several performance metrics, demonstrate that the proposed simplified DQN algorithm is competitive for channel parameter estimation when compared with benchmark channel estimation schemes such as a deep neural network (DNN) based on long short-term memory (LSTM), RL-based Q-learning, and estimation based on the minimum mean square error (MMSE) procedure.
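The abstract's core loop (an agent initialized with random channel statistics that refines a channel-gain estimate by interacting with the environment and collecting a reward) can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the authors' implementation: a single-state tabular Q update stands in for the deep Q network, the "environment" is a simulated noisy channel magnitude, and the reward is the negative squared estimation error.

```python
import numpy as np

# Toy stand-in for the paper's DQN channel predictor (hypothetical sketch):
# the agent picks a candidate channel-gain estimate (action), observes a
# noisy channel realization (environment), and is rewarded by the negative
# squared estimation error. A bandit-style tabular Q update replaces the
# deep Q network for brevity; all parameters are illustrative.

rng = np.random.default_rng(0)

n_levels = 32                       # discretized candidate gains (action space)
actions = np.linspace(0.05, 2.0, n_levels)
q = np.zeros(n_levels)              # single-state Q values (random init via zeros)

alpha, eps = 0.1, 0.2               # learning rate, exploration rate
true_gain = 0.9                     # mean channel magnitude of the simulated user

for episode in range(2000):
    # epsilon-greedy selection over candidate channel estimates
    if rng.random() < eps:
        a = int(rng.integers(n_levels))
    else:
        a = int(np.argmax(q))
    # environment step: noisy observation of the user's channel magnitude
    h = abs(true_gain + 0.1 * rng.standard_normal())
    reward = -(actions[a] - h) ** 2         # negative squared estimation error
    q[a] += alpha * (reward - q[a])         # incremental Q update

best = actions[int(np.argmax(q))]           # learned channel-gain estimate
print(round(float(best), 2))
```

With enough episodes, the greedy action converges toward the candidate gain closest to the mean channel magnitude, which is the essence of the interaction-driven refinement described in the abstract.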
Description: Data Availability Statement: Not applicable.
Other Identifiers: ORCID iD: Mohamed Gaballa
ORCID iD: Maysam Abbod
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Format: Adobe PDF

Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.