Abstract

In IoT (Internet of Things) edge computing, task offloading can introduce additional transmission delay and transmission energy consumption. To reduce the resource cost of task offloading and improve server resource utilization, we model the task offloading problem as a joint cost-minimization decision problem that integrates processing latency, processing energy consumption, and the drop rate of latency-sensitive tasks. We propose the Online Predictive Offloading (OPO) algorithm, based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks, to solve this offloading decision problem. In the training phase, the algorithm uses an LSTM network to predict the edge server load in real time, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm during offloading. In the testing phase, the LSTM network predicts the characteristics of the next task, and the DRL decision model then allocates computational resources for the task in advance, further reducing task response delay and enhancing the offloading performance of the system. Experimental evaluation shows that the algorithm reduces average latency by 6.25%, offloading cost by 25.6%, and the task drop rate by 31.7%.
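The paper's exact formulation appears in the System Model section; as a rough sketch, the joint cost the abstract describes can be read as a weighted sum of delay, energy, and a drop penalty. The weights and symbols below are illustrative assumptions, not the paper's notation:

```latex
\[
\min_{a_t}\ \sum_{t=1}^{T} \Big(
    \omega_1 D_t(a_t)                                            % processing + transmission delay of task t
  + \omega_2 E_t(a_t)                                            % processing + transmission energy of task t
  + \omega_3 \,\mathbb{1}\!\left[ D_t(a_t) > D_t^{\max} \right]  % penalty if the task exceeds its tolerance delay (dropped)
\Big)
\]
```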

Highlights

  • With the rapid development of IoT and the exponential growth of device scale, traditional cloud computing [1] can no longer meet the service demand of IoT devices

  • Zhao et al. [23] found it difficult to balance high resource consumption against high communication cost, and proposed a local computation offloading method that minimizes the total energy consumed by terminal devices and edge servers by jointly optimizing the task offloading rate, the CPU frequency of the system, the bandwidth allocated to the available channels, and the transmission power of each device in each time slot

  • We propose an Online Predictive Offloading (OPO) algorithm based on deep reinforcement learning to solve the modeled problem; the training procedure is given in Algorithm 1 (a minimal sketch follows this list)
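A minimal sketch of the training idea described above, assuming a DQN-style agent whose state is augmented with an LSTM load forecast. All class names, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual Algorithm 1:

```python
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """LSTM that predicts the next edge-server load from a load-history window."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, history):            # history: (batch, window, 1)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])       # (batch, 1) predicted load

class QNet(nn.Module):
    """Q-network over [task features, predicted load]; actions are offload targets."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

def dqn_update(qnet, target_net, batch, optimizer, gamma=0.99):
    """One temporal-difference update on a replay batch (s, a, r, s', done)."""
    s, a, r, s2, done = batch
    q = qnet(s).gather(1, a)               # Q(s, a) for the actions taken
    with torch.no_grad():
        y = r + gamma * (1 - done) * target_net(s2).max(1, keepdim=True).values
    loss = nn.functional.smooth_l1_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Forming a decision state from task features plus the load forecast:
predictor, qnet = LoadPredictor(), QNet(state_dim=5, n_actions=3)
load_hist = torch.randn(1, 10, 1)          # last 10 load readings
state = torch.cat([torch.randn(1, 4), predictor(load_hist)], dim=1)
action = qnet(state).argmax(dim=1)         # e.g. {local, edge, drop}
```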

Introduction

With the rapid development of the IoT and the exponential growth in device scale, traditional cloud computing [1] can no longer meet the service demands of IoT devices. Although deep reinforcement learning methods can cope with today's dynamic IoT task computing, most algorithms improve only the training process, making the model converge faster and to better values, while ignoring optimization of the inference (testing) process, which still incurs relatively high response delays in real scenarios. Our algorithm predicts dynamic task information in real time from the observed edge network conditions and server load. It makes offloading decisions that account for task processing delays, task tolerance delays, and task computation energy consumption, thereby avoiding network congestion and server overload, minimizing the task drop rate, and reducing the computational cost of tasks.
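The paper's reward design is detailed in the "Design of the Reward Function" section below; as a hedged sketch consistent with the goals just stated, one can penalize processing delay and energy, with an extra penalty when a task exceeds its tolerance delay (i.e., is dropped). The weights and penalty value are assumptions:

```python
def offloading_reward(delay_s: float, energy_j: float, tolerance_s: float,
                      w_delay: float = 1.0, w_energy: float = 0.5,
                      drop_penalty: float = 10.0) -> float:
    """Illustrative reward: weighted delay + energy cost, plus a drop penalty."""
    cost = w_delay * delay_s + w_energy * energy_j
    if delay_s > tolerance_s:              # task missed its tolerance delay: dropped
        cost += drop_penalty
    return -cost                           # DRL maximizes reward, so return negative cost
```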

Related Works
Offloading Methods with Different Modeling Objects
Offloading Methods with Different Problem Solving Strategies
System Model
Task Model
Decision Model
Terminal Layer Computing Model
Edge Layer Computing Model
Communication Model
Task Prediction Model
Load Prediction Model
Model Training Phase
Offloading Decision Phase
Algorithm Design
Design of the Reward Function
Experimental Setup
Task Prediction Experiment
Performance Comparison
Impact of the Tasks Number
Impact of the Learning Rate
Simulation of Real-Time Decision
Findings
Conclusions