Abstract

Computational efficiency is an important consideration in mobile edge computing (MEC) systems, yet the computational efficiency of UAV-assisted MEC systems has rarely been studied. In this paper, we maximize the computational efficiency of an MEC network by jointly optimizing the offloading decisions, the UAV flight path, and the allocation of each user's charging and offloading time. Deep reinforcement learning (DRL) is used to optimize the resources of a UAV-assisted MEC system in a complex urban environment: users' computation-intensive tasks are offloaded to the UAV-mounted MEC server, relieving task overload across the whole system. We design a framework algorithm that quickly adapts task-offloading decisions and resource allocation to the changing wireless channel conditions of complex urban environments. The optimal offloading decision is generated from the state space to the action space through deep reinforcement learning, and each user's charging time and offloading time are then allocated to maximize the weighted sum computation rate. Finally, a radio map is used to optimize the UAV trajectory and further improve the system's overall weighted sum computation rate. Simulation results show that the proposed DRL+TO framework significantly improves the weighted sum computation rate of the whole MEC system and saves time. The results confirm that the proposed MEC resource-optimization scheme is feasible and outperforms the benchmark schemes.
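The decision loop sketched in the abstract — a learned policy proposes an offloading decision, after which charging and offloading time are allocated to maximize the weighted sum computation rate — can be illustrated with a toy sketch. Everything below is a hypothetical illustration: the rate model, the fixed charging fraction `a`, and the flip-based quantization are assumptions for exposition, not the paper's actual formulation.

```python
import math

def computation_rate(decisions, gains, a, taus, weights):
    """Weighted sum computation rate for one time frame (toy model,
    not the paper's formulas). decisions[j]=0 means local computing,
    decisions[j]=1 means offloading to the UAV-mounted MEC server."""
    rate = 0.0
    for j, (d, h, w) in enumerate(zip(decisions, gains, weights)):
        if d == 0:
            # local computing powered by energy harvested during charging
            rate += w * (h * a) ** (1.0 / 3.0)
        else:
            # offloading during this user's transmission slot tau_j
            rate += w * taus[j] * math.log2(1.0 + h * a / max(taus[j], 1e-9))
    return rate

def quantize_and_select(relaxed, gains, weights):
    """Round a policy's relaxed output in [0,1]^n to binary offloading
    decisions: try straight rounding plus single-entry flips, evaluate
    each candidate's rate, and keep the best (a common DRL-offloading
    quantization heuristic, assumed here for illustration)."""
    n = len(relaxed)
    base = [int(x > 0.5) for x in relaxed]
    candidates = [base]
    for j in range(n):
        c = list(base)
        c[j] = 1 - c[j]
        candidates.append(c)
    best, best_rate = None, float("-inf")
    for c in candidates:
        m = sum(c)
        a = 0.5  # fixed charging fraction of the frame (toy choice)
        taus = [(1 - a) / m if d else 0.0 for d in c] if m else [0.0] * n
        r = computation_rate(c, gains, a, taus, weights)
        if r > best_rate:
            best, best_rate = c, r
    return best, best_rate
```

A usage example: `quantize_and_select([0.9, 0.2, 0.6], [1.0, 0.3, 0.8], [1.0, 1.0, 1.0])` returns the binary decision vector with the highest toy rate among the rounded candidate and its single-bit flips.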

