Abstract

Unmanned Surface Vehicles (USVs) generate large volumes of data that must be processed in real time during operation, yet they are typically constrained by limited computational and battery resources, so they offload tasks to the edge for processing. However, when numerous USVs offload tasks to edge nodes, some offloaded tasks may be dropped due to queuing timeouts. Existing task offloading methods generally consider the latency or overall system energy consumption of collaborative processing at the edge and end layers, but not the energy wasted when tasks are dropped. To address this, this paper establishes a task offloading model that minimizes long-term task latency and energy consumption by jointly considering the requirements of latency- and energy-sensitive tasks and the overall load dynamics across the cloud, edge, and end layers. A deep reinforcement learning (DRL)-based Task Offloading with Cloud Edge Jointly Load Balance Optimization algorithm (TOLBO) is proposed to select the best edge server or cloud server for offloading. Simulation results show that the algorithm improves the energy utilization of cloud and edge nodes compared with other algorithms, while significantly reducing the task drop rate, average latency, and energy consumption of end devices.
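To make the selection step concrete, the sketch below shows how a DRL agent of this kind might score candidate offloading targets (local execution, each edge server, or the cloud) and pick the one with the best estimated long-term value. This is a minimal illustration in PyTorch under our own assumptions; the state layout, network shape, and all names (QNetwork, select_target) are illustrative, not details taken from the paper.

    # Hypothetical sketch of DQN-style offloading-target selection.
    # All names and dimensions are assumptions made for illustration.
    import random
    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """Maps a system state (task size, deadline, queue lengths,
        node loads, ...) to one Q-value per offloading target
        (0 = local, 1..K = edge servers, K+1 = cloud)."""
        def __init__(self, state_dim: int, num_targets: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, num_targets),
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    def select_target(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
        """Epsilon-greedy choice of where to offload the current task."""
        num_targets = q_net.net[-1].out_features
        if random.random() < epsilon:
            return random.randrange(num_targets)      # explore
        with torch.no_grad():
            return int(q_net(state).argmax().item())  # exploit best estimate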

Highlights

  • The Unmanned Surface Vehicle (USV) is a kind of intelligent surface ship that can carry out autonomous navigation and mobile surveillance tasks

  • Deep learning (DL) in artificial intelligence has achieved great success in various fields, which provides an opportunity to enhance the intelligence of USVs

  • We propose TOLBO, a Deep Reinforcement Learning (DRL)-based cloud edge collaborative load optimization task offloading algorithm

Summary

INTRODUCTION

The Unmanned Surface Vehicle (USV) is a kind of intelligent surface ship that can carry out autonomous navigation and mobile surveillance tasks. The energy wasted when tasks are dropped after computation or offloading because they cannot meet their latency requirements remains unaccounted for, and it can drain the power supply and shorten USV working hours. Jie et al. [8] offloaded tasks to multiple edge servers with deep reinforcement learning algorithms to reduce energy consumption and average computational latency. These studies tend to consider collaboration between the edge and end layers, or end-to-end. Our model considers the load-level dynamics of the cloud and edge nodes to minimize the expected long-term cost of a task, comprising its delay, a penalty for dropped tasks, and the energy consumed to process it.
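The cost described above combines task delay, a penalty for dropped tasks, and processing energy. A minimal sketch of such a per-task cost, with all weights and names assumed for illustration (the paper's actual formulation and coefficients may differ):

    def task_cost(delay: float, energy: float, dropped: bool,
                  w_delay: float = 1.0, w_energy: float = 1.0,
                  drop_penalty: float = 10.0) -> float:
        """Per-task cost: weighted delay and energy, plus a fixed penalty
        if the task is dropped after a queuing timeout. Weights are
        illustrative assumptions, not values from the paper."""
        if dropped:
            # A dropped task still wasted the energy already spent on it.
            return drop_penalty + w_energy * energy
        return w_delay * delay + w_energy * energy

A DRL agent would then be trained to minimize the expected discounted sum of this cost over time, for example by using its negative as the reward signal.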

RELATED WORKS
Method
DELAY MODEL
ENERGY MODEL
PROBLEM MODELING
DRL MODEL
TRAINING PROCESS
EXPERIMENTAL ANALYSIS
PARAMETER ANALYSIS
Findings
CONCLUSION