Abstract
The need to cope with the continuously growing number of connected users and the increased demand for mobile broadband services in the Internet of Things has led to the notion of introducing the fog computing paradigm into fifth generation (5G) mobile networks in the form of the fog radio access network (F-RAN). The F-RAN approach brings computation capability to the edge of the network so as to relieve network bottlenecks and improve latency. Despite this potential, however, the management of computational resources remains a challenge in F-RAN architectures. This paper therefore aims to overcome the shortcomings of conventional approaches to computational resource allocation in F-RANs. Reinforcement learning (RL) is presented as a method for dynamic and autonomous resource allocation, and an algorithm based on Q-learning is proposed. RL offers several benefits in resource allocation problems, and the simulations carried out show that it outperforms reactive methods. The results further show that the proposed algorithm improves latency and thus has the potential to make a major impact on 5G applications, particularly the Internet of Things.
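To make the Q-learning idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a single fog node repeatedly decides how many compute units to allocate to incoming tasks, with discretised queue-load levels as states and negative queue length as a latency proxy for the reward. The environment model, state/action sizes, and hyperparameters are all hypothetical.

```python
import random

# Toy F-RAN setting (assumed, for illustration only):
# states = discretised load levels of a fog node's task queue,
# actions = number of compute units to allocate to the next task.
N_STATES = 5
N_ACTIONS = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Toy environment: allocating more units drains the queue faster.
    Reward is the negative resulting queue length (a latency proxy)."""
    arrivals = random.randint(0, 2)          # new tasks this slot
    served = action + 1                      # units allocated
    next_state = min(max(state + arrivals - served, 0), N_STATES - 1)
    return next_state, -next_state

def train(episodes=200, horizon=50, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy."""
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        state = random.randrange(N_STATES)
        for _ in range(horizon):
            if random.random() < EPS:        # explore
                action = random.randrange(N_ACTIONS)
            else:                            # exploit current estimate
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            nxt, reward = step(state, action)
            # Standard Q-learning update rule
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q
```

The learned table maps each load level to an allocation decision, which is the sense in which such an agent can allocate resources dynamically and autonomously rather than reacting to congestion after it occurs.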
Highlights
The forthcoming ubiquity of the Internet of Things (IoT) in everyday life, combined with the continuously growing number of connected users and the increased demand for mobile broadband services, has created a challenge for current cellular networks and necessitates an essential change in the way in which wireless networks are designed and modelled [1]
5G is expected to play a central role in enabling a better-connected networked society, providing new opportunities to deliver unprecedented applications and services that can support new users and devices. These applications encompass massive machine-type communications, known as the Internet of Things (IoT); enhanced mobile broadband, requiring high data rates over a wide coverage area; and ultra-reliable and low-latency communications (URLLC), with stringent requirements on latency and reliability [3], [4]
The performance evaluation, conducted through simulation modelling, seeks to determine the efficacy of the proposed reinforcement learning algorithm for resource allocation in a 5G fog radio access network (F-RAN) architecture
Summary
The forthcoming ubiquity of the Internet of Things (IoT) in everyday life, combined with the continuously growing number of connected users and the increased demand for mobile broadband services, has created a challenge for current cellular networks and necessitates an essential change in the way in which wireless networks are designed and modelled [1]. This challenge, which is evident when considering the need to deal with the exponential amounts of data produced at the edge of the network, is further exacerbated by the current network state, which is both extremely heterogeneous and immensely fragmented [2].