Abstract

Owing to their flexibility and controllability, unmanned aerial vehicles (UAVs) are frequently integrated into mobile edge computing (MEC) networks to improve system performance. This paper investigates a novel multi-user, multi-hotspot MEC network supported by a UAV, where the UAV helps compute the tasks offloaded from end users (EUs) in multiple hotspots. In this network, we consider task priorities and task sizes to be dynamic, driven by the EUs' demands. We then propose a task priority-based system utility model to evaluate the network performance, which assigns priorities to tasks according to their urgency. We further formulate a utility maximization problem that jointly optimizes the UAV's access path and the EUs' offloading strategy, subject to constraints on the UAV's battery capacity and flight duration. Since the formulated problem is NP-hard, we present a deep reinforcement learning (DRL) based scheme as a solution, which learns a policy that adapts to the dynamic task demands. Simulation results demonstrate that the proposed DRL scheme outperforms alternative benchmark schemes in terms of system utility.

