Abstract

Mobile applications are progressively becoming more sophisticated and complex, increasing their computational requirements. Traditional offloading approaches that rely exclusively on the Cloud infrastructure are now deemed unsuitable due to the inherent associated delay. Edge Computing can address most of the Cloud limitations at the cost of limited available resources. This bottleneck necessitates an efficient allocation of offloaded tasks from the mobile devices to the Edge. In this paper, we consider a task offloading setting with applications of different characteristics and requirements, and propose an optimal resource allocation framework leveraging the amalgamation of the edge resources. To balance the trade-off between keeping total energy consumption low, respecting end-to-end delay requirements, and balancing the load at the Edge, we additionally introduce a Markov Random Field (MRF)-based mechanism for the distribution of the excess workload. The proposed approach investigates a realistic scenario, including different categories of mobile applications, edge devices with different computational capabilities, and dynamic wireless conditions modeled by the dynamic behavior and mobility of the users. The framework is complemented with a prediction mechanism that facilitates the orchestration of the physical resources. The efficiency of the proposed scheme is evaluated via modeling and simulation and is shown to outperform a well-known task offloading solution, as well as a more recent one.
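
To illustrate the idea of distributing excess workload via a Markov Random Field, the following Python sketch re-samples per-task site assignments with Gibbs-like updates over a simple energy function that penalizes load imbalance and transfer delay. All site names, capacities, penalties and weights are assumptions for illustration, not the formulation used in the paper.

```python
# Minimal sketch of an MRF-style redistribution of excess workload across edge
# sites (not the paper's ENERDGE formulation). Site names, capacities, transfer
# penalties and the trade-off weight below are hypothetical placeholders.
import math
import random

SITES = {"site_A": 8.0, "site_B": 4.0, "site_C": 6.0}             # assumed capacities
TRANSFER_PENALTY = {"site_A": 0.2, "site_B": 0.5, "site_C": 0.3}  # assumed delay cost per task
EXCESS_TASKS = [1.0] * 10                                         # ten unit-sized excess tasks


def energy(assignment):
    """Energy term combining load imbalance across sites and transfer delay."""
    load = {s: 0.0 for s in SITES}
    for demand, site in zip(EXCESS_TASKS, assignment):
        load[site] += demand
    utilization = [load[s] / SITES[s] for s in SITES]
    mean_u = sum(utilization) / len(utilization)
    imbalance = sum((u - mean_u) ** 2 for u in utilization)
    delay = sum(TRANSFER_PENALTY[s] for s in assignment)
    return imbalance + 0.1 * delay                                # 0.1: arbitrary trade-off weight


def redistribute(sweeps=100, temperature=0.5):
    """Gibbs-like sweeps: re-sample each task's site from a Boltzmann distribution."""
    assignment = [random.choice(list(SITES)) for _ in EXCESS_TASKS]
    for _ in range(sweeps):
        for i in range(len(assignment)):
            weights = []
            for site in SITES:
                candidate = assignment[:i] + [site] + assignment[i + 1:]
                weights.append(math.exp(-energy(candidate) / temperature))
            total = sum(weights)
            assignment[i] = random.choices(list(SITES), [w / total for w in weights])[0]
    return assignment


if __name__ == "__main__":
    print(redistribute())
```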

Highlights

  • The proliferation of telecommunications in the last decade has offered a plethora of new applications and features to the end-users

  • By having a set of virtual machine (VM) flavors corresponding to different core allocations and maximum throughputs, we provide a better level of accuracy than using a single Linear Time-Invariant (LTI) model for the whole operation (see the sketch after these highlights)

  • The results reveal that the proposed Markov Random Field (MRF) solution converges rapidly compared to the solution proposed in [12], which translates directly into significantly lower mean execution times
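
As a concrete reading of the second highlight, the sketch below keeps one linear power model per VM flavor (core allocation and maximum sustainable throughput) instead of a single LTI model for the whole operating range. The flavor set, numeric coefficients and selection rule are hypothetical, not parameters from the paper.

```python
# Minimal sketch: one linear power model per VM flavor instead of a single LTI
# model across the whole operating range. All values below are assumed.
from dataclasses import dataclass


@dataclass
class VMFlavor:
    name: str
    cores: int
    max_throughput: float   # requests/s the flavor can sustain (assumed)
    idle_power: float       # W, intercept of this flavor's linear model (assumed)
    power_per_req: float    # W per request/s, slope of this flavor's model (assumed)

    def power(self, throughput: float) -> float:
        """Linear power estimate, valid only up to this flavor's maximum throughput."""
        if throughput > self.max_throughput:
            raise ValueError(f"{self.name} cannot sustain {throughput} req/s")
        return self.idle_power + self.power_per_req * throughput


# Hypothetical flavor set: one model per core allocation.
FLAVORS = [
    VMFlavor("small",  cores=2, max_throughput=100.0, idle_power=10.0, power_per_req=0.08),
    VMFlavor("medium", cores=4, max_throughput=220.0, idle_power=18.0, power_per_req=0.06),
    VMFlavor("large",  cores=8, max_throughput=480.0, idle_power=32.0, power_per_req=0.05),
]


def estimate_power(throughput: float) -> float:
    """Pick the smallest flavor that sustains the load and use its own linear model."""
    for flavor in sorted(FLAVORS, key=lambda f: f.max_throughput):
        if throughput <= flavor.max_throughput:
            return flavor.power(throughput)
    raise ValueError("offered load exceeds the largest flavor")


if __name__ == "__main__":
    print(f"{estimate_power(150.0):.1f} W")  # served by the 'medium' flavor's model
```

Switching models at the flavor boundaries is what gives the piecewise approximation its accuracy advantage over one global linear fit across all core allocations.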

Summary

Introduction

The proliferation of telecommunications in the last decade has offered a plethora of new applications and features to the end-users. Although the evolution of wireless communications has been accompanied by computationally powerful devices, applications still need to fully or partially offload the involved computational tasks. An efficient way to enable task offloading and energy savings is to leverage the abundant resources available in the Cloud. This mobile-to-Cloud interconnection can facilitate the execution of computationally-intensive and data-driven processing tasks in a relatively low-cost and effective manner [3]. However, offloading the tasks of the end-devices to the Cloud can generate two major issues: high transmission latency and a capacity-demand mismatch, i.e., resource overprovisioning, which leads to resource and energy waste [4]. The Edge Computing (EC) approach, which pushes computing capabilities to the Edge of the network, is being rapidly adopted and seems promising in terms of achieving the ambitious millisecond-scale latency required in various 5G and IoT applications [5].

Motivation & Challenges
Related Work
Mobility Prediction for Task Offloading
System Model
Task Offloading
VM Flavor Design
Power Modeling
User Density and Workload Prediction
Stage 1—Resource Allocation Optimization
Stage 2—Inter-Site Redistribution of Excess Workload
ENERDGE Core Algorithm
Results
Performance Evaluation
Smart Museum Experiment Setting
Resource Allocation Evaluation
User Density Prediction Impact
Stage 1 Evaluation—Response to Dynamic Network Conditions
Stage 2 Evaluation—MRF-Based Excess Workload Redistribution Analysis
Two-Stage Approach Comparison
Findings
Conclusions