Abstract

Unmanned Aerial Vehicle (UAV)-empowered edge computing has been widely investigated in obstacle-free scenarios, where a moving UAV handles singleton tasks offloaded from mobile devices on the ground. However, little attention has been paid to the scenario in which the UAV serves a complex area containing multiple obstacles and dependent tasks. A dependent task can be modeled as a Directed Acyclic Graph (DAG) comprising a number of sub-tasks, each of which is executed by a corresponding Service Function (SF) deployed on the UAV. Against this backdrop, this paper formulates joint UAV trajectory planning, DAG task scheduling, and SF deployment as an optimization problem. A Deep Reinforcement Learning (DRL)-based algorithm is then presented to tackle this NP-hard problem. The state space, action space, and reward function of the agent, i.e., the UAV, are defined under the DRL framework. To evaluate the effectiveness of the proposal, a series of experiments is conducted with different parameter settings. The results show that the DRL-based algorithm substantially outperforms three heuristic algorithms in terms of the success rate of trajectory planning, the number of executed tasks, and the average task response latency.
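The abstract's DRL formulation (a UAV agent with a state space, action space, and reward function over an obstacle-laden area) can be illustrated with a minimal environment skeleton. The grid size, obstacle positions, task placement, and reward weights below are illustrative assumptions, not values from the paper; the sketch only shows the general shape of such a formulation.

```python
import random


class UAVEdgeEnv:
    """Toy sketch of a DRL environment for UAV-assisted edge computing.

    State:  UAV grid position plus the locations of pending sub-tasks.
    Action: 0..3 = move north/south/east/west on a 2-D grid.
    Reward: per-step latency penalty, collision penalty near obstacles
            or boundaries, and a bonus for executing a sub-task.
    All numeric values are hypothetical placeholders.
    """

    def __init__(self, grid=5, obstacles=None, num_tasks=3, seed=0):
        self.grid = grid
        self.obstacles = obstacles if obstacles is not None else {(2, 2)}
        self.num_tasks = num_tasks
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.pos = (0, 0)
        # Pending sub-tasks, each located at a ground-device cell.
        self.tasks = [(self.rng.randrange(self.grid),
                       self.rng.randrange(self.grid))
                      for _ in range(self.num_tasks)]
        return self.state()

    def state(self):
        return (self.pos, tuple(self.tasks))

    def step(self, action):
        dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][action]
        nx, ny = self.pos[0] + dx, self.pos[1] + dy
        reward = -0.1  # per-step latency penalty
        if not (0 <= nx < self.grid and 0 <= ny < self.grid) \
                or (nx, ny) in self.obstacles:
            reward -= 1.0  # hit a boundary or obstacle; UAV stays put
        else:
            self.pos = (nx, ny)
        if self.pos in self.tasks:
            self.tasks.remove(self.pos)
            reward += 5.0  # executed a sub-task at this cell
        done = not self.tasks  # episode ends when all sub-tasks are done
        return self.state(), reward, done
```

A DRL agent would then be trained on rollouts of this environment; the paper's actual formulation additionally covers DAG precedence constraints among sub-tasks and SF deployment decisions, which this skeleton omits.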
