Abstract

With the ongoing advancement of the electric power Internet of Things (IoT), traditional power inspection methods face challenges such as low efficiency and high risk. Unmanned aerial vehicles (UAVs) have emerged as a more efficient solution for inspecting power facilities owing to their high maneuverability, excellent line-of-sight communication, and strong adaptability. However, UAVs typically operate with limited computational power and energy, which constrains their effectiveness on computationally intensive and latency-sensitive inspection tasks. To address this issue, we propose a UAV task offloading strategy based on deep reinforcement learning (DRL), designed for power inspection scenarios consisting of mobile edge computing (MEC) servers and multiple UAVs. First, we propose a UAV-edge server collaborative computing architecture that fully exploits the mobility of UAVs and the high-performance computing capabilities of MEC servers. Second, we establish a computational model of energy consumption and task processing latency in the UAV power inspection system, clarifying the trade-offs involved in UAV offloading strategies. Finally, we formalize the task offloading problem as a multi-objective optimization problem and model it as a Markov decision process (MDP). We then propose a task offloading algorithm based on the Deep Deterministic Policy Gradient (OTDDPG) to obtain the optimal offloading strategy for UAVs. Simulation results demonstrate that this approach outperforms baseline methods, with significant improvements in task processing latency and energy consumption.
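To make the described approach concrete, the following is a minimal sketch of a DDPG-style offloading agent: an actor maps a UAV's state to a continuous offloading ratio, a critic scores state-action pairs, and the reward trades off latency against energy. The state dimensions, network sizes, and reward weights here are illustrative assumptions, not the paper's actual design.

```python
# Minimal DDPG-style sketch for UAV task offloading (illustrative, not the
# paper's implementation). Assumed: a continuous offloading ratio in [0, 1]
# and a weighted latency-energy reward, consistent with the abstract.
import torch
import torch.nn as nn

STATE_DIM = 6    # assumed state: task size, CPU cycles, battery, channel gain, etc.
ACTION_DIM = 1   # assumed action: fraction of the task offloaded to the MEC server

class Actor(nn.Module):
    """Deterministic policy: state -> offloading ratio in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # squash to [0, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-function: (state, action) -> estimated return."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def reward(latency, energy, w_t=0.5, w_e=0.5):
    # Multi-objective reward: lower latency and energy yield a higher reward.
    # The weights w_t and w_e are assumed, not taken from the paper.
    return -(w_t * latency + w_e * energy)

# Example: select an offloading ratio for one (assumed) state vector.
actor = Actor()
state = torch.zeros(1, STATE_DIM)
ratio = actor(state)  # fraction of the inspection task sent to the MEC server
```

In a full training loop, the critic would be regressed toward a bootstrapped Bellman target and the actor updated by the deterministic policy gradient through the critic, with target networks and a replay buffer as in standard DDPG.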
