Abstract

In light of the rapid proliferation of Internet of Things (IoT) devices and applications, the fog radio access network (Fog-RAN) has recently been proposed for fifth generation (5G) wireless communications to meet the ultra-reliable low-latency communication (URLLC) requirements of IoT applications that cannot tolerate large delays. To this end, fog nodes (FNs) are equipped with computing, signal processing, and storage capabilities to extend the inherent operations and services of the cloud to the edge. We consider the problem of sequentially allocating the FN's limited resources to IoT applications with heterogeneous latency requirements. For each access request from an IoT user, the FN must decide whether to serve it locally at the edge using its own resources or to refer it to the cloud, conserving its valuable resources for future users of potentially higher utility to the system (i.e., with lower latency requirements). We formulate the Fog-RAN resource allocation problem as a Markov decision process (MDP) and employ several reinforcement learning (RL) methods, namely Q-learning, SARSA, Expected SARSA, and Monte Carlo, to solve the MDP by learning the optimum decision-making policies. We verify the performance and adaptivity of the RL methods and compare them with the network slicing approach under various slicing thresholds. Extensive simulation results considering 19 IoT environments of heterogeneous latency requirements corroborate that the RL methods consistently achieve the best possible performance regardless of the IoT environment.
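To make the sequential serve-or-refer decision concrete, the following is a minimal tabular Q-learning sketch of it. The state space, reward shape, and arrival dynamics here are simplifying assumptions for illustration, not the paper's exact MDP formulation, and all names (N_UNITS, N_CLASSES, reward, step) are hypothetical.

    # Hedged sketch: tabular Q-learning for the FN admission decision.
    # State: (free resource units at the FN, latency class of the arriving request).
    # Actions: 0 = refer request to the cloud, 1 = serve it locally at the edge.
    import random
    from collections import defaultdict

    N_UNITS = 4              # assumed FN capacity in resource units
    N_CLASSES = 3            # assumed latency classes: 0 = most delay-sensitive
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    Q = defaultdict(float)   # Q[(state, action)] -> action-value estimate

    def reward(state, action):
        free, cls = state
        if action == 1 and free > 0:        # served at the edge
            return float(N_CLASSES - cls)   # lower-latency classes yield higher utility
        return 0.0                          # referred to the cloud (or no free unit)

    def step(state, action):
        free, _ = state
        if action == 1 and free > 0:
            free -= 1                       # one edge resource unit consumed
        return (free, random.randrange(N_CLASSES))  # next request arrives

    def greedy(state):
        return max((0, 1), key=lambda a: Q[(state, a)])

    for episode in range(5000):
        state = (N_UNITS, random.randrange(N_CLASSES))
        for _ in range(20):                 # requests per episode (assumed horizon)
            action = random.randrange(2) if random.random() < EPS else greedy(state)
            r = reward(state, action)
            nxt = step(state, action)
            # Q-learning update: bootstrap from the greedy action in the next state.
            td_target = r + GAMMA * max(Q[(nxt, 0)], Q[(nxt, 1)])
            Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
            state = nxt

Under these assumptions, the learned policy tends to reserve the last free units for low-latency (high-utility) classes and refer delay-tolerant requests to the cloud, which is the qualitative behavior the abstract describes.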

Highlights

  • There is an ever-growing demand for wireless communication technologies, driven by the increasing popularity of Internet of Things (IoT) devices, the widespread use of social networking platforms, the proliferation of mobile applications, and a lifestyle that has become highly dependent on technology

  • We present a novel framework for resource allocation in the fog radio access network (F-RAN) for 5G, employing reinforcement learning methods to guarantee efficient utilization of the limited resources of fog nodes (FNs) while satisfying the low-latency requirements of IoT applications

  • We consider the F-RAN structure shown in Fig. 1, in which FNs are connected through the fronthaul to the cloud controller (CC), where massive computing capability, centralized baseband units (BBUs), and cloud storage pooling are available; a rough sketch of this system model follows the list
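As referenced in the last highlight, here is a rough, hypothetical data model of the two entity types in this structure. Class and field names are illustrative assumptions, not the paper's notation.

    # Hedged sketch of the F-RAN entities: an FN at the edge, the CC behind the fronthaul.
    from dataclasses import dataclass, field

    @dataclass
    class CloudController:
        """CC with (effectively) abundant compute, a centralized BBU pool, and cloud storage."""
        queued: list = field(default_factory=list)

        def accept(self, request):
            self.queued.append(request)   # served centrally, at higher latency

    @dataclass
    class FogNode:
        capacity: int                     # limited edge compute/storage units
        in_use: int = 0

        def serve_locally(self, request) -> bool:
            if self.in_use < self.capacity:
                self.in_use += 1          # consume one edge resource unit
                return True
            return False                  # must refer the request to the CC over the fronthaul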


Summary

INTRODUCTION

There is an ever-growing demand for wireless communication technologies, driven by the increasing popularity of Internet of Things (IoT) devices, the widespread use of social networking platforms, the proliferation of mobile applications, and a lifestyle that has become highly dependent on technology. We present a novel framework for resource allocation in F-RAN for 5G, employing reinforcement learning methods to guarantee efficient utilization of limited FN resources while satisfying the low-latency requirements of IoT applications.

CONTRIBUTIONS

With the motivation of satisfying the low-latency requirements of heterogeneous IoT applications through F-RAN, we provide a novel framework for allocating the FN's limited resources to users that guarantees their efficient utilization. In this paper, we propose an MDP formulation for the considered F-RAN resource allocation problem and investigate the use of various RL methods, namely Q-learning (QL), SARSA, Expected SARSA (E-SARSA), and Monte Carlo (MC), for learning the optimal fine-grained decision-making policies of the MDP problem.
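These four methods differ chiefly in the target used to update the action-value estimates. Below is a minimal sketch of those targets, assuming a tabular value table Q, a binary action set, and an epsilon-greedy behavior policy; all names (Q, ACTIONS, GAMMA, EPS) are illustrative, not the paper's notation.

    # Hedged sketch: the update targets that distinguish QL, SARSA, E-SARSA, and MC.
    GAMMA, EPS = 0.95, 0.1
    ACTIONS = (0, 1)   # e.g., 0 = refer to the cloud, 1 = serve at the edge

    def eps_greedy_probs(Q, s):
        """Probability of each action under an epsilon-greedy policy."""
        best = max(ACTIONS, key=lambda a: Q[(s, a)])
        return {a: EPS / len(ACTIONS) + (1.0 - EPS) * (a == best) for a in ACTIONS}

    def q_learning_target(Q, r, s2):
        return r + GAMMA * max(Q[(s2, a)] for a in ACTIONS)          # off-policy: greedy max

    def sarsa_target(Q, r, s2, a2):
        return r + GAMMA * Q[(s2, a2)]                               # on-policy: sampled action

    def expected_sarsa_target(Q, r, s2):
        probs = eps_greedy_probs(Q, s2)
        return r + GAMMA * sum(probs[a] * Q[(s2, a)] for a in ACTIONS)  # expectation over policy

    def monte_carlo_target(rewards):
        """Full discounted episode return; no bootstrapping."""
        g = 0.0
        for r in reversed(rewards):
            g = r + GAMMA * g
        return g

    # Tiny usage with a hand-filled table (values are arbitrary):
    Q = {("s2", 0): 0.2, ("s2", 1): 0.5}
    print(q_learning_target(Q, r=1.0, s2="s2"))       # 1.0 + 0.95 * 0.5
    print(expected_sarsa_target(Q, r=1.0, s2="s2"))   # averages over the epsilon-greedy policy

In short: SARSA bootstraps from the action actually taken, Expected SARSA averages over the policy's action distribution, Q-learning bootstraps from the greedy action, and Monte Carlo waits for the complete episode return.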

SYSTEM MODEL
OPTIMAL POLICIES
SIMULATIONS
CONCLUSION