Abstract

Edge Computing (EC) is an emerging paradigm for coping with the unprecedented growth of user demand for low-latency computation and content delivery. However, user mobility and the limited coverage of an Edge Computing Server (ECS) cause service discontinuity and degrade Quality of Service (QoS). Service migration has great potential to address this issue, but choosing the optimal migration strategy and communication strategy is a key challenge. In this paper, we propose solving the service migration problem with a Reinforcement Learning (RL)-based model that takes long-term objectives into account and makes migration and communication decisions more efficiently. We consider a single-user EC system with a predefined user movement pattern, in which the user passes through the coverage areas of multiple ECSs and the corresponding Virtual Machine (VM) on the ECS decides the migration and communication strategies. We design an RL-based framework for this single-user EC service migration system, and analyze Q-learning-based and Deep Q Network (DQN)-based schemes in detail. Simulation results show that, across different system parameters, our RL-based system achieves the best performance compared with two baseline methods.
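The tabular Q-learning scheme mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all concrete values (number of ECSs, migration and communication costs, learning rate, discount factor, and the two-action stay/migrate design) are assumptions. The state is the pair (user position, service position), at each step the agent either keeps the VM on its current ECS or migrates it to the user's nearest ECS, and the reward is the negative of the migration-plus-communication cost, so maximizing long-term reward minimizes long-term cost.

```python
import random

# Illustrative parameters (assumed, not from the paper).
N_ECS = 5             # number of edge servers along the user's path
MIGRATION_COST = 2.0  # one-time cost of moving the VM to another ECS
COMM_COST = 1.0       # per-step latency cost per hop between user and service
ACTIONS = (0, 1)      # 0 = keep service where it is, 1 = migrate to user's ECS
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-table: maps ((user_ecs, service_ecs), action) -> estimated value.
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action):
    """Apply one migration decision; return (next_state, reward)."""
    user, service = state
    cost = 0.0
    if action == 1 and service != user:
        service = user
        cost += MIGRATION_COST
    cost += COMM_COST * abs(user - service)  # communication latency this step
    next_user = min(user + 1, N_ECS - 1)     # predefined movement: one hop forward
    return (next_user, service), -cost       # reward = negative total cost

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        # Exploring starts: random (user, service) pair so every state is visited.
        state = (rng.randrange(N_ECS), rng.randrange(N_ECS))
        for _ in range(N_ECS):
            if rng.random() < EPS:                              # explore
                action = rng.choice(ACTIONS)
            else:                                               # exploit
                action = max(ACTIONS, key=lambda a: q(state, a))
            nxt, reward = step(state, action)
            best_next = max(q(nxt, a) for a in ACTIONS)
            # Standard Q-learning temporal-difference update.
            Q[(state, action)] = q(state, action) + ALPHA * (
                reward + GAMMA * best_next - q(state, action))
            state = nxt

train()
```

After training, in a state where the user has drifted far from its service (e.g. user at ECS 4, service still at ECS 0), the learned Q-values favor migrating over staying, since the one-time migration cost is quickly repaid by lower communication latency. The DQN-based scheme replaces this table with a neural network approximator over the same state and action definitions.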
