Please use this identifier to cite or link to this item:
http://bura.brunel.ac.uk/handle/2438/30163
Title: Optimising resource allocation for computational offloading in a mobile edge environment
Authors: Hussain, Sarfraz
Advisors: Li, M; Meng, H
Keywords: Artificial Intelligence; Multi-tier Reinforcement Learning; Resource management
Issue Date: 2024
Publisher: Brunel University London
Abstract: With the recent, albeit limited, rollout of the fifth generation of mobile communications, alongside the widespread adoption of open-source networking solutions based on SDN and NFV technologies, opportunities to define the architecture of 5G over its lifetime have become a hot topic in both industry and academia. Despite noticeable advances in bandwidth, services planned for deep integration within the 5G architecture, such as Mobile Edge Computing, are still emerging. Effective resource allocation is a pivotal component upon which latency-sensitive data handling will build to enhance the future of communications. This research makes three significant contributions to the field of Multi-access Edge Computing (MEC). Firstly, it tests and validates various network simulation software packages to identify the most effective tools for simulating MEC environments. The efficiency of these simulators is evaluated to ensure they accurately replicate real-life network scenarios, which is crucial for constructing precise algorithms and determining simulation parameters. Secondly, the study implements a single-layer reinforcement learning (RL) algorithm within the orchestration module of the simulator to optimise network resource allocation. The goal of the algorithm is to reduce latency and task failure rates while increasing efficiency. The RL algorithm is benchmarked against traditional methods such as Round Robin and Greedy algorithms, demonstrating significant improvements in network service levels and task success rates. Lastly, the research develops a multi-layer reinforcement learning algorithm based on the initial single-layer approach. This advanced algorithm incorporates replay memory and approximate Q functions within a neural network, addressing various stages of the network infrastructure and leveraging previously generated Q tables. These enhancements ensure more efficient and effective network management in MEC environments.
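To illustrate the single-layer RL idea described in the abstract, the sketch below implements a minimal tabular Q-learning agent that assigns offloaded tasks to edge servers so as to minimise latency. All class names, parameters, and the toy load/latency model are assumptions for illustration only; they are not taken from the thesis's simulator or orchestration module.

```python
import random

class QOffloader:
    """Minimal tabular Q-learning agent for edge-server selection (illustrative)."""

    def __init__(self, n_servers, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_servers = n_servers
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # Q[state] -> list of expected rewards, one per server

    def choose(self, state):
        row = self.q.setdefault(state, [0.0] * self.n_servers)
        if random.random() < self.epsilon:                      # explore
            return random.randrange(self.n_servers)
        return max(range(self.n_servers), key=row.__getitem__)  # exploit

    def update(self, state, action, reward, next_state):
        row = self.q.setdefault(state, [0.0] * self.n_servers)
        nxt = self.q.setdefault(next_state, [0.0] * self.n_servers)
        # Standard one-step Q-learning update rule.
        row[action] += self.alpha * (reward + self.gamma * max(nxt) - row[action])

def simulate(agent, loads, episodes=2000):
    """Toy training loop: reward is the negative latency of the chosen server."""
    random.seed(0)
    for _ in range(episodes):
        state = "busy" if max(loads) > 5 else "idle"
        action = agent.choose(state)
        latency = loads[action] + random.random()  # lower load -> lower latency
        agent.update(state, action, -latency, state)

agent = QOffloader(n_servers=3)
simulate(agent, loads=[8.0, 2.0, 5.0])
# After training, the greedy policy should prefer the least-loaded server.
```

The thesis's multi-layer variant replaces the explicit Q table with approximate Q functions in a neural network and adds replay memory; the update rule above is the tabular starting point that approach builds on.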
Description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London
URI: https://bura.brunel.ac.uk/handle/2438/30163
Appears in Collections: Electronic and Electrical Engineering; Dept of Electronic and Electrical Engineering Theses
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FulltextThesis.pdf | | 10.14 MB | Adobe PDF | View/Open |
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.