Abstract

In mobile edge computing (MEC), partial computational offloading can reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into components, some of which execute locally on the UE while the rest are offloaded to a mobile edge server (MES). In this paper, we investigate partial offloading in MEC using a supervised deep learning approach. The proposed technique, the comprehensive and energy efficient deep learning-based offloading technique (CEDOT), intelligently selects both the offloading policy and the size of each component of a task so as to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task and the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset generated from our mathematical model, which reduces the time delay and energy consumption of the overall process. Although the mathematical model used to generate this dataset is computationally complex, the complexity is absorbed offline: once the DNN is trained, offloading decisions are made quickly and with low energy consumption. We propose a comprehensive cost function that depends on various delays, energy consumption, radio resources, and computation resources, and that also accounts for the energy consumption and delay caused by the task-division process in partial offloading. Existing work does not consider task partitioning jointly with the computational offloading policy, and therefore ignores the time and energy consumption of the task-division process in the cost function. Simulation results demonstrate the superior performance of the proposed technique, with the DNN deciding the offloading policy and task partitioning accurately and with minimum delay and energy consumption for the UE. More than 70% accuracy of the trained DNN is achieved, owing to the comprehensive training dataset. The simulation results also show that the accuracy of the DNN remains constant when the UEs are moving, which means that the offloading-policy and partitioning decisions are not affected by UE mobility.
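To make the described cost structure concrete, the following is a minimal sketch of a composite offloading cost that weighs UE delay against UE energy and charges the task-division overhead mentioned in the abstract. Every symbol, parameter value, and modelling choice here (parallel local/remote execution, an assumed result size, the weighting scheme) is an illustrative assumption, not the paper's actual formulation.

    from dataclasses import dataclass

    @dataclass
    class Params:
        f_local: float    # UE CPU frequency in cycles/s (assumed)
        f_mes: float      # MES CPU frequency in cycles/s (assumed)
        rate_up: float    # uplink data rate in bits/s (assumed)
        rate_down: float  # downlink data rate in bits/s (assumed)
        p_tx: float       # UE transmit power in W (assumed)
        p_rx: float       # UE receive power in W (assumed)
        kappa: float      # effective switched capacitance of the UE CPU (assumed)
        t_div: float      # time overhead of the task-division process in s (assumed)
        e_div: float      # energy overhead of the task-division process in J (assumed)
        w_time: float     # weight on total delay in the cost
        w_energy: float   # weight on UE energy in the cost

    def offloading_cost(bits_total, cycles_per_bit, offload_fraction, p):
        """Weighted sum of UE delay and UE energy for one candidate partitioning."""
        bits_off = offload_fraction * bits_total
        bits_loc = bits_total - bits_off

        # Components kept on the UE: local execution delay and dynamic CPU energy.
        t_local = bits_loc * cycles_per_bit / p.f_local
        e_local = p.kappa * (p.f_local ** 2) * bits_loc * cycles_per_bit

        # Offloaded components: divide the task, transmit, execute on the MES,
        # then receive the result (result size assumed to be 10% of the input).
        if bits_off > 0:
            t_off = (p.t_div
                     + bits_off / p.rate_up
                     + bits_off * cycles_per_bit / p.f_mes
                     + 0.1 * bits_off / p.rate_down)
            e_off = (p.e_div
                     + p.p_tx * bits_off / p.rate_up
                     + p.p_rx * 0.1 * bits_off / p.rate_down)
        else:
            t_off, e_off = 0.0, 0.0

        # Local and offloaded parts are assumed to run in parallel.
        total_delay = max(t_local, t_off)
        ue_energy = e_local + e_off
        return p.w_time * total_delay + p.w_energy * ue_energy

    # Example: compare full local execution, a 50/50 split, and total offloading.
    p = Params(f_local=1e9, f_mes=10e9, rate_up=5e6, rate_down=20e6,
               p_tx=0.5, p_rx=0.1, kappa=1e-27, t_div=0.01, e_div=0.05,
               w_time=0.5, w_energy=0.5)
    for frac in (0.0, 0.5, 1.0):
        print(frac, offloading_cost(4e6, 1000, frac, p))

Under such a formulation, a dataset generator could sweep offload_fraction over candidate partitions for many network states and label each state with the minimum-cost choice, which is the kind of supervised training data the abstract describes.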

Highlights

  • Computational capabilities of user equipments (UEs) have increased over recent years, yet UEs remain limited in computation and battery resources relative to complex, energy-hungry applications

  • We formulate a comprehensive cost function that considers multiple parameters, namely network fluctuations, the computing resources of the mobile edge server (MES), propagation delay, and the time delays and energy consumption due to partitioning, transmission, execution, and reception

  • Through extensive simulation results, we demonstrate the superiority of the proposed technique, compared with the total offloading technique (TOT), random offloading technique (ROT), deep learning-based offloading technique (DOT), and energy efficient deep learning-based offloading technique (EEDOT), in terms of energy consumption and execution delay of UEs

  • The UEs can use the trained deep neural network (DNN) to find the offloading policy and partitioning for n components with minimum cost

  • Since the cost function depends on both energy consumption and time delay, the end user consumes minimum energy while making faster decisions on selecting the best partitioning and offloading policy for n components per task (a minimal, hypothetical inference sketch follows this list)
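Building on the last highlight, here is a minimal sketch of how a UE might query a trained classifier to pick an offloading policy and partition fraction in one shot. The feature set, the discretisation of the partition fraction into 11 classes, and the placeholder (untrained) network architecture are all assumptions for illustration; the paper's actual trained DNN and its inputs are not reproduced on this page.

    import numpy as np
    import tensorflow as tf

    # Candidate decisions: the fraction of the task offloaded to the MES.
    # 0.0 means full local execution, 1.0 means total offloading, and the
    # intermediate values correspond to partial offloading with different
    # partitionings. The 11-way discretisation is an assumption.
    PARTITION_OPTIONS = np.linspace(0.0, 1.0, 11)

    def build_placeholder_dnn(num_features, num_classes):
        """Small fully connected classifier standing in for the trained DNN."""
        return tf.keras.Sequential([
            tf.keras.Input(shape=(num_features,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])

    def decide_offloading(model, features):
        """Return the partition fraction whose class the DNN scores highest."""
        probs = model.predict(features[np.newaxis, :], verbose=0)[0]
        return float(PARTITION_OPTIONS[int(np.argmax(probs))])

    # Hypothetical feature vector: task size (bits), cycles per bit, uplink
    # rate, MES load, and UE battery level. The real feature set is unknown here.
    features = np.array([4e6, 1000.0, 5e6, 0.4, 0.8], dtype=np.float32)
    model = build_placeholder_dnn(num_features=features.size,
                                  num_classes=PARTITION_OPTIONS.size)
    print("Chosen offload fraction:", decide_offloading(model, features))

In practice, the class set could instead encode per-component sizes for n components; a single offload fraction is used here only to keep the sketch compact.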



Introduction

The computational capabilities of user equipments (UEs) have increased over recent years, yet UEs still have limited computational and battery resources relative to the complex and energy-hungry applications they run [1,2,3]. The delay-sensitive nature of these applications has resulted in increasingly high computing demand and energy consumption. To reduce the energy consumption and service delay of UEs, a new paradigm known as mobile edge computing (MEC) has been introduced [4,5]. MEC offers computing power and storage capacity to UEs at the edge of the wireless network. In MEC, UEs offload compute-intensive and delay-sensitive applications to a mobile edge server (MES) over wireless links to minimize their service delay and energy consumption, since it is difficult for a UE with limited computation and storage resources to meet the requirements of such applications. Battery lifetime is the main constraint of UEs, and with purely local computing UEs may not achieve a good quality of experience.
