Abstract

Latency-critical applications, e.g., automated and assisted driving services, can now be deployed in fog or edge computing environments, offloading energy-consuming tasks from end devices. Besides proximity, however, the edge computing platform must provide the operational techniques necessary to avoid introducing additional delays. In this paper, we propose an integrated edge platform that comprises orchestration methods with such objectives, handling the deployment of both functions and data. We show how integrating the function orchestration solution with the adaptive data placement of a distributed key–value store can decrease end-to-end latency even when the mobility of end devices creates a dynamic set of requirements. Along with the necessary monitoring features, the proposed edge platform is capable of serving the nomadic users of novel applications with low latency requirements. We showcase this capability in several scenarios, in which we evaluate the end-to-end latency performance of our platform by comparing delay measurements against the benchmark of a Redis-based setup that lacks the adaptive nature of our data orchestration. Our results show that stringent delay requirements necessitate the close integration presented in this paper: functions and data must be orchestrated in sync in order to fully exploit the potential that the proximity of edge resources enables.
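
To make the abstract's core idea concrete, the following minimal, self-contained Python sketch illustrates what orchestrating functions and data "in sync" means: when a placement decision picks the lowest-latency edge site for a function, the keys that function accesses are relocated in the same step. All names here (EdgeSite, place_in_sync, the example sites and key prefix) are illustrative assumptions, not the paper's actual API.

    # Sketch of synchronized function/data placement; names are illustrative.
    class EdgeSite:
        def __init__(self, name, latency_ms):
            self.name = name
            self.latency_ms = latency_ms  # current RTT from the user to this site
            self.functions = set()
            self.keys = set()

    def place_in_sync(sites, fn_name, key_prefix):
        """Move the function AND its keys to the lowest-latency site."""
        target = min(sites, key=lambda s: s.latency_ms)
        for s in sites:
            s.functions.discard(fn_name)
            s.keys.discard(key_prefix)
        target.functions.add(fn_name)  # function placement decision
        target.keys.add(key_prefix)    # adaptive data placement follows it
        return target

    sites = [EdgeSite("site-A", 4.0), EdgeSite("site-B", 18.0)]
    chosen = place_in_sync(sites, "detect_obstacles", "vehicle:42:")
    print(chosen.name)  # -> site-A, the site closest to the user right now

If the user moves and site latencies change, re-running the same placement step migrates the function and its keys together, which is the behavior the abstract attributes to the integrated platform.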

Highlights

  • Cloud computing is widely used in the digital industry as a technology that enables cheap and easy deployment of online web services, big data processing, and Industry 4.0 and Internet of Things (IoT) applications

  • Since edge nodes are potentially closer to the end users, we argue that an edge computing platform must be locality-aware and proactive in terms of resource provisioning in order to further decrease end-to-end latency for application users

  • Our platform builds on our previous prototypes: an application layout optimization and deployment framework [8] and a distributed key–value store [7]. We evaluate this integrated edge platform by comparing data access in our adaptive data store [9] and in Redis [10] for use-cases where heavy user mobility is assumed across edge sites (a hedged micro-benchmark sketch of the Redis baseline follows this list). We show that the access patterns generated by our autonomous transport-inspired use-cases call for synchronization between function and data placement, and that our integrated solution achieves better performance by leveraging this capability
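
The Redis-based baseline mentioned above boils down to timing reads against a store whose placement does not adapt. Below is a hedged micro-benchmark sketch using the redis-py client; the host, port, key name, and value sizes are placeholder assumptions, not the paper's measurement setup.

    # Micro-benchmark sketch of the Redis-read baseline (placeholder setup).
    import time
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)  # assumed local test instance

    for size in (1_000, 10_000, 100_000):  # value sizes in bytes
        r.set("bench", b"x" * size)
        start = time.perf_counter()
        r.get("bench")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{size:>7} B read: {elapsed_ms:.3f} ms")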


Summary

Introduction

Cloud computing is widely used in the digital industry as a technology that enables cheap and easy deployment of online web services, big data processing, and Industry 4.0 and Internet of Things (IoT) applications. Numerous projects managed by companies and academic institutions have built FaaS platforms, but the most widely used ones are those underlying the FaaS services offered by IT giants, e.g., Amazon's AWS Lambda [3], Google Cloud Functions [4] and Microsoft Azure Functions [5]. Most of these platforms operate with container technologies: the user's executable code is packed into a container that is instantiated when the first request to call the function arrives (a minimal sketch of this pattern follows below). As a first contribution, we showcase an integrated platform that simultaneously provides cost-optimal, latency-aware placement of FaaS functions and access pattern-aware data relocation.
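
The container-per-function pattern described above can be sketched in a few lines: the first invocation pays an instantiation cost (a cold start), while subsequent calls reuse the running instance. The registry, handler, and startup delay below are illustrative assumptions, not any specific platform's implementation.

    # Sketch of FaaS cold vs. warm starts; names and delays are illustrative.
    import time

    _warm = {}  # function name -> already-instantiated "container"

    def invoke(fn_name, handler, event):
        if fn_name not in _warm:      # cold start: instantiate on first call
            time.sleep(0.2)           # stand-in for container startup cost
            _warm[fn_name] = handler
        return _warm[fn_name](event)  # warm path: reuse the running instance

    def greet(event):
        return f"hello {event['user']}"

    print(invoke("greet", greet, {"user": "edge"}))  # cold (pays startup)
    print(invoke("greet", greet, {"user": "fog"}))   # warm (no startup delay)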

Use-Cases Involving User Mobility
Related Work
The Proposed Edge Platform
Computing Optimization
Data Optimization
Results
Size Dependency of Data Access Delay
Placement Dependency of Data Access Delay
Effects of Function Relocation
Aggregated Results
[Figure legend: ABDB read vs. Redis read]
Data Access Simulation of a Complex Automotive Application
Discussion