Abstract

Task offloading has attracted widespread attention for accelerating applications and reducing energy consumption. However, in areas with surging traffic (nucleic acid testing sites, concerts, etc.), the limited resources of fixed base stations cannot meet user requirements. Unmanned aerial vehicles (UAVs) can effectively serve as temporary base stations or aerial access points for mobile devices (MDs). In a UAV-assisted MEC system, we jointly optimize the UAV trajectory and user association to maximize computational efficiency. Because this joint problem is a non-convex fractional problem, a traditional fractional-programming method such as Dinkelbach’s method cannot solve it on its own. Therefore, to enable online decision making for this joint optimization problem, we introduce deep reinforcement learning (DRL) and propose a double-layer cycle algorithm for maximizing computation efficiency (DCMCE). Specifically, in the outer loop, we model the trajectory planning problem as a Markov decision process and use DRL to output the best trajectory. In the inner loop, we use Dinkelbach’s method to simplify the fractional problem and propose a priority function that optimizes user association to maximize computational efficiency. Simulation results show that DCMCE achieves higher computational efficiency than the baseline schemes.
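As background for the inner loop, the abstract's mention of Dinkelbach's method can be illustrated with a minimal sketch. Dinkelbach's method maximizes a ratio f(x)/g(x) (with g(x) > 0) by repeatedly solving the parametric subproblem max f(x) − λ·g(x) and updating λ to the current ratio until the subproblem's optimal value reaches zero. The function and candidate set below are hypothetical stand-ins, not the paper's actual efficiency model or association variables.

```python
def dinkelbach_max_ratio(candidates, f, g, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach's method.

    Assumes g(x) > 0 for every candidate. Returns the maximizing candidate
    and lambda, which converges to the optimal ratio f(x*)/g(x*).
    """
    lam = 0.0
    x_best = None
    for _ in range(max_iter):
        # Parametric subproblem: maximize f(x) - lam * g(x) over the candidates.
        x_best = max(candidates, key=lambda x: f(x) - lam * g(x))
        value = f(x_best) - lam * g(x_best)
        if abs(value) < tol:
            # Subproblem value is zero: lam equals the optimal ratio.
            return x_best, lam
        # Update lam to the ratio achieved by the current maximizer.
        lam = f(x_best) / g(x_best)
    return x_best, lam


# Toy usage: maximize (x + 2) / (x^2 + 1) over a small discrete set.
x_star, ratio = dinkelbach_max_ratio(
    [1, 2, 3, 4],
    f=lambda x: x + 2,
    g=lambda x: x * x + 1,
)
# x_star == 1, ratio == 1.5
```

In the paper's setting, the subproblem at each λ would correspond to the simplified (non-fractional) user-association problem solved with the proposed priority function.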
