Abstract

The fog radio access network (F-RAN) has recently been proposed to satisfy the low-latency communication requirements of Internet of Things (IoT) applications. We consider the problem of sequentially allocating the limited resources of a fog node to a heterogeneous population of IoT applications with varying latency requirements. Specifically, for each service request, the fog node must decide whether to serve that user locally, providing it with low-latency communication, or to refer it to the cloud control center, keeping the limited fog resources available for future users. We formulate the problem as a Markov Decision Process (MDP), for which we obtain the optimal decision policy through Reinforcement Learning (RL). The proposed resource allocation method learns from the IoT environment how to strike the right balance between two conflicting objectives: maximizing the total served utility and minimizing the idle time of the fog node. Extensive simulation results for various IoT environments corroborate the theoretical underpinnings of the proposed RL-based resource allocation method.
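To make the admission-control idea concrete, the following is a minimal sketch of a tabular Q-learning loop for the serve-locally vs. refer-to-cloud decision. It assumes an illustrative state of (free fog resource blocks, utility class of the arriving request), a reward that trades served utility against an idle-capacity penalty, and toy arrival/departure dynamics; these names, constants, and dynamics are assumptions for illustration and do not reproduce the paper's exact MDP formulation.

```python
import random
from collections import defaultdict

# Illustrative parameters (not taken from the paper): fog capacity,
# heterogeneous request utility classes, and Q-learning hyperparameters.
NUM_RESOURCES = 10
UTILITY_CLASSES = [1.0, 2.0, 5.0]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
SERVE_LOCALLY, REFER_TO_CLOUD = 0, 1

# Q[(free_resources, request_utility)] -> estimated value of each action.
Q = defaultdict(lambda: [0.0, 0.0])

def choose_action(state):
    """Epsilon-greedy selection over the two admission decisions."""
    if random.random() < EPSILON:
        return random.choice([SERVE_LOCALLY, REFER_TO_CLOUD])
    return max((SERVE_LOCALLY, REFER_TO_CLOUD), key=lambda a: Q[state][a])

def step(free, utility, action):
    """Toy environment transition: serving locally consumes one resource
    block and earns the request's utility; referring to the cloud earns
    nothing but preserves capacity. An idle penalty discourages hoarding."""
    if action == SERVE_LOCALLY and free > 0:
        reward, free = utility, free - 1
    else:
        reward = 0.0
    reward -= 0.05 * free                                   # idle-capacity penalty
    if random.random() < 0.3:                               # random service completion
        free = min(NUM_RESOURCES, free + 1)
    next_state = (free, random.choice(UTILITY_CLASSES))     # next arriving request
    return reward, next_state

# Tabular Q-learning over simulated request arrivals.
state = (NUM_RESOURCES, random.choice(UTILITY_CLASSES))
for _ in range(50_000):
    action = choose_action(state)
    reward, next_state = step(state[0], state[1], action)
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state
```

The learned policy then serves a request only when its utility justifies consuming a resource block given the current occupancy, which is the balance between served utility and idle time described in the abstract.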
