Abstract

Mobile edge computing (MEC) has recently emerged as a promising paradigm for meeting the growing computation demands of the Internet of Things (IoT). However, because the computation capacity of the MEC server is limited, an efficient computation offloading scheme, in which each IoT device decides whether to offload its generated data to the MEC server, is needed. Given the limited battery capacity of IoT devices, energy harvesting (EH) is introduced to extend the lifetime of IoT systems. The unpredictable nature of both the generated data and the harvested energy, however, makes designing an effective computation offloading scheme for an EH MEC system challenging. To cope with this problem, we model the computation offloading process as a Markov decision process (MDP), so that no prior statistical information is required; reinforcement learning algorithms can then be adopted to derive the optimal offloading policy. To address the high time complexity of learning, we first introduce an after-state for each state-action pair, which greatly reduces the number of states in the formulated MDP. To handle the continuous state space, we then introduce a polynomial value function approximation method that accelerates the learning process. On this basis, we propose an after-state reinforcement learning algorithm for the formulated MDP to obtain the optimal offloading policy. To provide practical guidance for real MEC systems, several analytical properties of the offloading policy are also presented. Simulation results validate the effectiveness of the proposed algorithm, which significantly improves the achieved system reward at reasonable complexity.
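To make the after-state idea concrete, the sketch below is a minimal, hypothetical Python illustration of after-state learning with polynomial value function approximation, not the paper's actual algorithm. The two-dimensional state (battery level, data-queue backlog), the `transition` function that splits a step into its deterministic offloading effect and stochastic arrivals, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of after-state TD learning with polynomial
# value function approximation. State variables, transition model,
# and hyperparameters are assumptions, not taken from the paper.

def poly_features(after_state, degree=2):
    """Polynomial basis over a 2-D after-state (battery, queue)."""
    b, q = after_state
    return np.array([b**i * q**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)])

class AfterStateLearner:
    def __init__(self, degree=2, alpha=0.01, gamma=0.95):
        self.degree, self.alpha, self.gamma = degree, alpha, gamma
        n_feats = (degree + 1) * (degree + 2) // 2  # features for 2 variables
        self.w = np.zeros(n_feats)

    def value(self, after_state):
        # Approximate value of an after-state as a linear function
        # of its polynomial features.
        return self.w @ poly_features(after_state, self.degree)

    def act(self, state, actions, transition):
        # Greedy decision over after-states: 'transition' is a
        # hypothetical helper mapping (state, action) to the
        # deterministic after-state and the immediate reward.
        def score(a):
            after, reward = transition(state, a)
            return reward + self.gamma * self.value(after)
        return max(actions, key=score)

    def update(self, after_state, reward_next, after_state_next):
        # TD(0) update on consecutive after-states; the stochastic
        # data arrivals and energy harvesting occur between them.
        target = reward_next + self.gamma * self.value(after_state_next)
        td_error = target - self.value(after_state)
        self.w += self.alpha * td_error * poly_features(after_state, self.degree)
```

Because the after-state absorbs the action's deterministic effect, the learner needs only one value estimate per after-state rather than one per state-action pair, which is what shrinks the effective state space, and the polynomial features let that estimate generalize across the continuous battery and queue levels.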
