Abstract

Cloud computing cannot satisfy the requirements of applications on mobile terminals because of its disadvantages in latency, link load, and energy consumption, which has motivated Mobile Edge Computing (MEC) as a novel computing paradigm. As an important research direction of MEC, existing service migration methods are still limited: they cannot learn migration paths or adapt to dynamic network conditions and user movement. In this paper, we propose a novel service migration policy method based on reinforcement learning. We first investigate user movement, four different edge network situations, and traditional migration policies. We then formulate the system requirements in Satisfiability Modulo Theories (SMT) logic to obtain the migration policy space. We further propose a dynamic-awareness deep Q-learning algorithm that iteratively selects paths from the policy space and uses dynamic awareness to adjust the learning rate adaptively. The optimal convergence of the algorithm is proved theoretically. Finally, experimental results demonstrate the effectiveness of our method in terms of migration success rate, service interruption time, and load balance compared with other solutions.
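
The sketch below is only an illustration of the general idea described in the abstract, not the paper's implementation: it uses a toy Q-table instead of a deep Q-network, a placeholder environment (`step`), and a hypothetical `dynamic_awareness` rule that scales the learning rate by recent TD-error variability to mimic adaptation to user movement and changing network conditions. All names, rewards, and state/action sizes are assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy setup: a small set of edge nodes (states) and candidate
# migration targets (actions). Rewards and transitions are placeholders,
# not the paper's actual MEC model or SMT-derived policy space.
rng = np.random.default_rng(0)
n_states, n_actions = 6, 6           # edge nodes / candidate migration targets
Q = np.zeros((n_states, n_actions))  # Q-table stand-in for the deep Q-network

gamma = 0.9      # discount factor
base_lr = 0.5    # base learning rate
epsilon = 0.1    # exploration rate

def step(state, action):
    """Placeholder environment: penalise long migration paths; noise stands in
    for fluctuating network conditions."""
    next_state = action
    reward = -abs(state - action) + rng.normal(0.0, 0.1)
    return next_state, reward

def dynamic_awareness(td_errors, window=20):
    """Assumed adaptation rule: larger recent TD-error fluctuation (i.e. a more
    dynamic situation) leads to a larger learning rate."""
    recent = td_errors[-window:]
    if len(recent) < 2:
        return base_lr
    return base_lr * (1.0 + min(1.0, float(np.std(recent))))

td_history = []
state = 0
for episode in range(200):
    # epsilon-greedy selection over the candidate migration targets
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))

    next_state, reward = step(state, action)

    td_error = reward + gamma * np.max(Q[next_state]) - Q[state, action]
    td_history.append(td_error)

    lr = dynamic_awareness(td_history)   # adaptively adjusted learning rate
    Q[state, action] += lr * td_error    # standard Q-learning update

    state = next_state

print("Greedy migration target per node:", np.argmax(Q, axis=1))
```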
