Abstract

The edge computing paradigm has been proposed to support latency-sensitive applications such as Augmented Reality (AR), Virtual Reality (VR), and online gaming by placing computing resources close to where they are most demanded: at the edge of the network. Many solutions propose deploying virtual resources as close as possible to consumers using virtual machines and containers. However, the most popular container orchestration tools, e.g., Docker Swarm and Kubernetes, do not take locality into account during deployment, resulting in poor placement choices at the edge of the network. In this paper, we propose an edge deployment strategy to tackle the orchestrator's lack of locality awareness. In this strategy, the orchestrator collects latency and real-time resource consumption information from the current container deployments, providing a bird's-eye view of the most demanded locations and of the best places to deploy in order to cover the largest number of clients. We evaluated the proposed model using 16 AWS regions across the globe and compared it to standard deployment strategies. The experimental results show that our edge strategy reduces the average latency between serving containers and clients by up to 4 times compared to standard deployment algorithms.
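The placement decision described in the abstract can be illustrated with a minimal sketch (hypothetical names and metrics, not the paper's actual implementation): score each candidate region by the demand-weighted latency to known client clusters and pick the lowest-scoring region that still has enough resource headroom for the new container.

```python
# Hypothetical sketch of latency-aware placement, assuming the orchestrator
# has already gathered per-region latency and resource-consumption metrics.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    cpu_free: float               # fraction of CPU still available (0.0-1.0)
    client_latency_ms: dict       # client cluster -> measured RTT in ms

def pick_region(regions, client_demand, cpu_needed=0.1):
    """Return the region with the lowest demand-weighted latency among
    those that still have enough spare CPU for the new container."""
    total = sum(client_demand.values())
    best, best_score = None, float("inf")
    for r in regions:
        if r.cpu_free < cpu_needed:
            continue  # skip regions without enough headroom
        # Weight each client cluster's latency by its share of total demand.
        score = sum(
            r.client_latency_ms.get(c, 1e9) * d / total
            for c, d in client_demand.items()
        )
        if score < best_score:
            best, best_score = r, score
    return best

# Example: two regions and two client clusters with uneven demand.
regions = [
    Region("eu-west-1", 0.6, {"paris": 15, "tokyo": 210}),
    Region("ap-northeast-1", 0.4, {"paris": 230, "tokyo": 8}),
]
print(pick_region(regions, {"paris": 100, "tokyo": 900}).name)  # ap-northeast-1
```

In this toy example the demand-weighted score favors the region closest to the bulk of the clients (Tokyo), which is the kind of locality-aware choice the standard schedulers do not make.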
