Abstract

Recently, unmanned aerial vehicles (UAVs) have been introduced into mobile edge computing (MEC) systems to help process the large-scale task data generated by distributed user devices (UDs). In such UAV-MEC networks, a UAV acting as the main MEC server flies frequently to receive fully offloaded task data from its associated UDs and determines the ratio of tasks partially offloaded to the base station (BS) according to its battery and computing resources. However, non-stop flight and task offloading may drain the limited on-board battery and further degrade service performance. Hence, we seek an energy-efficient UAV trajectory and advocate deploying a charging station to replenish the UAV's energy. To achieve an optimal trade-off between maximizing the amount of collected data and minimizing the UAV's energy consumption, we formulate a joint trajectory planning, communication scheduling, charging scheduling, and task offloading problem that maximizes the energy efficiency of the UAV. We then propose a novel Priority-based Deep reinforcement learning (DRL) approach for the Trajectory planning, Communication scheduling, Charging scheduling, and Task offloading of the UAV, called PD-TCCT, to solve this continuous online decision-making problem. Finally, extensive evaluation results illustrate that PD-TCCT outperforms the other baselines under different environments.
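As a rough illustration of the energy-efficiency trade-off described above, the objective can be read as "collected data per unit of UAV energy spent." The sketch below is a simplified stand-in, not the paper's exact formulation: the function name, the decomposition of energy into flight, computing, and offloading components, and the simple ratio form are all assumptions for illustration.

```python
def energy_efficiency(collected_bits: float,
                      flight_energy: float,
                      compute_energy: float,
                      offload_energy: float) -> float:
    """Bits collected per joule of total UAV energy consumed.

    A simplified proxy for the trade-off between maximizing the
    amount of collected data and minimizing UAV energy consumption;
    a DRL agent such as PD-TCCT would aim to maximize a reward of
    this general shape over an episode.
    """
    total_energy = flight_energy + compute_energy + offload_energy
    if total_energy <= 0.0:
        raise ValueError("total energy must be positive")
    return collected_bits / total_energy

# Example: collecting 8e8 bits while spending 1500 J on flight,
# 300 J on computing, and 200 J on offloading gives 4e5 bits/J.
print(energy_efficiency(8e8, 1500.0, 300.0, 200.0))
```

Under this reading, frequent flight and heavy offloading enlarge the denominator, so the agent is pushed toward trajectories and scheduling decisions that gather more data per joule rather than simply more data.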
