Abstract

Cloud computing technologies cannot satisfy the requirements of applications on mobile terminals because of their disadvantages in delay, link load, and energy consumption, so Mobile Edge Computing (MEC) has been proposed as a novel computing paradigm. As an important research direction in MEC, existing service migration methods are limited in that they cannot learn migration paths or adapt to dynamic network conditions and user movement. In this paper, we propose a novel service migration policy method based on reinforcement learning. We first investigate user movement, four different edge network situations, and traditional migration policies. We then formulate the system requirements in Satisfiability Modulo Theories (SMT) logic to obtain the migration policy space. We further propose a dynamic-awareness deep Q-learning algorithm that iteratively selects paths from the policy space and uses dynamic awareness to adjust the learning rate adaptively. Meanwhile, the optimal convergence of our algorithm is proved theoretically. Finally, experimental results demonstrate the effectiveness of our method in terms of migration success rate, service interruption time, and load balance compared with other solutions.
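To illustrate the dynamic-awareness idea described above, the following is a minimal sketch in which a tabular Q-learning update stands in for the paper's deep Q-network, and a mobility measure scales the learning rate. The state/action sizes, reward, mobility signal, and learning-rate bounds are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sketch: tabular Q-learning stands in for the paper's deep Q-network.
# States: (user location, serving edge node); actions: candidate migration targets.
# All parameters (alpha bounds, mobility measure, reward) are illustrative assumptions.

N_STATES, N_ACTIONS = 16, 4          # toy migration policy space
Q = np.zeros((N_STATES, N_ACTIONS))
gamma = 0.9                          # discount factor
alpha_min, alpha_max = 0.05, 0.5     # bounds for the adaptive learning rate

def adaptive_alpha(mobility):
    """Scale the learning rate with observed user mobility (dynamic awareness)."""
    return alpha_min + (alpha_max - alpha_min) * min(mobility, 1.0)

def step(state, action, rng):
    """Toy environment: random next state and a cost penalizing interruption/load."""
    next_state = int(rng.integers(N_STATES))
    reward = -rng.random()           # placeholder cost (delay + load); higher is better
    return next_state, reward

rng = np.random.default_rng(0)
state = 0
for episode in range(200):
    mobility = rng.random()          # e.g., recent handover frequency, normalized to [0, 1]
    alpha = adaptive_alpha(mobility)
    # Epsilon-greedy action selection over candidate migration targets.
    action = int(np.argmax(Q[state])) if rng.random() > 0.1 else int(rng.integers(N_ACTIONS))
    next_state, reward = step(state, action, rng)
    # Standard Q-learning update with the dynamically adjusted learning rate.
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    state = next_state
```

In this sketch, faster user movement raises the learning rate so the policy tracks a changing edge network more quickly, while slow movement lowers it to stabilize convergence; the paper's method applies this adjustment within a deep Q-learning framework over the SMT-derived policy space.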
