Abstract

In recent years, Mobile Edge Computing (MEC) has emerged as a new paradigm enabling low-latency access to services deployed on edge nodes that offer computation, storage, and communication facilities. Vendors deploy their services on MEC servers to improve performance and mitigate the network latencies often encountered when accessing cloud services. A service placement policy determines which services are deployed on which MEC servers. A number of mechanisms exist in the literature to determine the optimal placement of services with respect to different performance metrics. However, for applications designed as microservice workflow architectures, service placement schemes need to be re-examined through a different lens owing to the inherent interdependencies between microservices. Indeed, the dynamic environment, with stochastic user movement and service invocations, along with a large placement configuration space, makes microservice placement in MEC a challenging task. Additionally, owing to user mobility, a placement scheme may need to be recalibrated, triggering service migrations to maintain the advantages offered by MEC. Existing microservice placement and migration schemes rely on on-demand strategies. In this work, we take a different route and propose a Reinforcement Learning-based proactive mechanism for microservice placement and migration. We use the San Francisco Taxi dataset to validate our approach. Experimental results show the effectiveness of our approach in comparison to other state-of-the-art methods.
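The abstract does not describe the mechanism's implementation. Purely to make the idea of RL-driven proactive placement concrete, the following is a minimal, hypothetical sketch (not the paper's method): a tabular Q-learning agent that, for a single microservice, decides on which MEC server to host it as the user moves stochastically between coverage zones. The 3-zone topology, latency matrix, migration cost, and all parameter values are illustrative assumptions.

```python
# Hypothetical toy sketch of proactive RL-based placement/migration.
# State = (user_zone, hosting_server); action = server to host the
# service next. Reward penalizes access latency plus a one-off
# migration cost, so the learned policy migrates ahead of the user
# only when the mobility pattern makes it worthwhile.
import random

N_ZONES = 3            # user can be in one of 3 coverage zones (assumed)
N_SERVERS = 3          # one MEC server per zone (assumed)
LATENCY = [[1, 5, 9],  # LATENCY[zone][server]: access latency (assumed)
           [5, 1, 5],
           [9, 5, 1]]
MIGRATION_COST = 3     # one-off penalty for moving the service (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(z, s): [0.0] * N_SERVERS
     for z in range(N_ZONES) for s in range(N_SERVERS)}

def step(user_zone, host, action):
    """Apply a placement action; reward = -(latency + migration cost)."""
    cost = LATENCY[user_zone][action] + (MIGRATION_COST if action != host else 0)
    # Stochastic user mobility: drift to an adjacent zone with prob. 0.3
    if random.random() < 0.3:
        user_zone = max(0, min(N_ZONES - 1, user_zone + random.choice([-1, 1])))
    return user_zone, action, -cost

user_zone, host = 0, 0
for episode in range(20000):
    state = (user_zone, host)
    if random.random() < EPSILON:        # explore
        action = random.randrange(N_SERVERS)
    else:                                # exploit
        action = max(range(N_SERVERS), key=lambda a: Q[state][a])
    user_zone, host, reward = step(user_zone, host, action)
    next_state = (user_zone, host)
    # Standard Q-learning update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state])
                                 - Q[state][action])

# Greedy policy learned after training: where to place the service
# given the user's current zone and the current host.
for z in range(N_ZONES):
    for s in range(N_SERVERS):
        best = max(range(N_SERVERS), key=lambda a: Q[(z, s)][a])
        print(f"user in zone {z}, hosted on {s} -> place on {best}")
```

In this toy setting the agent learns to anticipate movement rather than react to it, which is the distinction the abstract draws between proactive and on-demand schemes; the paper's actual state space, actions, and reward would be considerably richer.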
