Abstract

Migrating a service to vantage locations close to its clients not only reduces service access latency but also lowers network costs for the service provider. This makes migration particularly important for time-bounded services, which must achieve both enhanced QoS and cost effectiveness. However, service migration is not free: it incurs the costs of bulk-data transfer and possible service disruption, which increase the overall service costs. To reap the benefits of service migration while minimizing these costs, in this paper we leverage reinforcement learning (RL) to propose an efficient algorithm, called Mig-RL, for service migration in a cloud environment. Mig-RL employs an agent that learns the optimal policy for deciding the migration status of a service using a classic RL algorithm, Q-learning. Specifically, the agent learns from historical access information when and to where the service should be migrated, without requiring any prior information about service accesses. The agent can therefore adapt dynamically to the environment and perform online migration in real time. Experimental results on real and synthesized access sequences from cloud networks show that Mig-RL minimizes service costs while improving quality of service (QoS) by adapting to changes in mobile access patterns.
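The abstract describes the Q-learning setup only at a high level. The toy sketch below is our own illustration, not the paper's Mig-RL algorithm: the state, action, and cost definitions (service location, dominant client region, per-hop access cost, flat migration fee) are all assumptions made for the example. It shows how a tabular Q-learning agent can learn when migrating toward clients pays off against the migration cost:

```python
import random

def learn_migration_policy(num_regions=3, steps=5000, alpha=0.1, gamma=0.9,
                           eps=0.1, access_cost=1.0, migration_cost=2.0,
                           seed=0):
    """Tabular Q-learning for a toy service-migration problem.

    State  : (service_location, dominant_client_region)
    Action : region to host the service in the next step (action == current
             location means "do not migrate")
    Reward : negative cost = access cost (distance between service and the
             client region, a latency proxy) plus a flat fee if we migrated.
    """
    rng = random.Random(seed)
    # Q-table: Q[(location, client_region)][action]
    Q = {(s, c): [0.0] * num_regions
         for s in range(num_regions) for c in range(num_regions)}

    loc, client = 0, 0
    for _ in range(steps):
        state = (loc, client)
        # Epsilon-greedy action selection over candidate host regions.
        if rng.random() < eps:
            action = rng.randrange(num_regions)
        else:
            action = max(range(num_regions), key=lambda a: Q[state][a])

        # Cost of this step: access latency plus migration fee if we moved.
        cost = access_cost * abs(action - client)
        if action != loc:
            cost += migration_cost
        reward = -cost

        # The dominant client region drifts occasionally, modelling the
        # changing access patterns the agent must adapt to online.
        next_client = client if rng.random() < 0.9 else rng.randrange(num_regions)
        next_state = (action, next_client)

        # Standard Q-learning update.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        loc, client = action, next_client
    return Q
```

After training, the greedy policy for a state where the service is already co-located with its clients should be to stay put, while a distant dominant client region makes migration the higher-valued action once the fee is amortized.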
