Abstract

Unmanned aerial vehicles (UAVs) are finding increasing application in many fields owing to their low cost, small size, and high mobility. In traditional cellular networks, UAVs can serve as aerial mobile base stations that collect information from edge users in a timely manner, making them a promising communication tool for edge computing. However, owing to their limited battery capacity, UAVs may not be able to collect all of the available information. Path planning ensures that the UAV collects as much data as possible within its limited flight distance, so studying UAV path planning is essential. In addition, because of the particular characteristics of air-to-ground communication, the UAV's flying altitude has a crucial impact on the channel quality between the UAV and the user. Deep reinforcement learning (DRL) is a mature machine-learning technique that can be deployed in unknown environments. Applying DRL to data collection in UAV-assisted cellular networks allows the UAV to find a joint path-planning and altitude-optimization scheme that collects more information under a limited energy budget, saves human and material resources, and thus achieves higher application value. In this work, we transform the UAV path planning problem into a Markov decision process (MDP). By applying the proximal policy optimization (PPO) algorithm, the proposed method achieves adaptive UAV path planning. Simulations verify the performance of the proposed scheme against a conventional baseline.
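
To make the MDP formulation concrete, a minimal sketch of one plausible state/action/reward design follows; the specific state components, action set, and reward terms are illustrative assumptions, not necessarily the paper's exact formulation:

\[
\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R, \gamma), \qquad
s_t = (x_t, y_t, h_t, e_t), \qquad
\mathcal{A} = \{\text{N, S, E, W, ascend, descend}\},
\]
\[
R(s_t, a_t) = \sum_{k \in \mathcal{K}} \Delta D_k(s_t, a_t) - \lambda\, c(a_t),
\]

where \((x_t, y_t)\) is the UAV's horizontal position, \(h_t\) its altitude, \(e_t\) its remaining energy, \(\Delta D_k\) the data newly collected from ground user \(k\) (which depends on the altitude-dependent air-to-ground channel), and \(c(a_t)\) the energy cost of action \(a_t\), weighted by \(\lambda\).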
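As a hedged illustration of the training pipeline the abstract describes, the sketch below casts the problem as a gymnasium environment and trains it with the off-the-shelf PPO implementation from stable-baselines3. The environment class, the toy coverage/channel model, and every parameter (grid size, energy budget, the altitude band where the channel is best) are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch (not the authors' implementation) of UAV data collection
# cast as an MDP and trained with PPO. All quantities are illustrative.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class UAVDataCollectionEnv(gym.Env):
    """UAV moves on a grid and adjusts altitude to collect data from ground users."""

    def __init__(self, grid=10, n_users=8, energy_budget=60):
        super().__init__()
        self.grid, self.n_users, self.energy_budget = grid, n_users, energy_budget
        # Actions: 0-3 horizontal moves (N/S/E/W), 4 ascend, 5 descend.
        self.action_space = spaces.Discrete(6)
        # Observation: UAV (x, y, altitude), remaining energy, per-user collected flags.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(4 + n_users,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.array([0.0, 0.0])
        self.alt = 0.5                      # normalized flying altitude
        self.energy = self.energy_budget
        self.users = self.np_random.uniform(0, self.grid, size=(self.n_users, 2))
        self.collected = np.zeros(self.n_users, dtype=np.float32)
        return self._obs(), {}

    def _obs(self):
        return np.concatenate([
            self.pos / self.grid,
            [self.alt, self.energy / self.energy_budget],
            self.collected]).astype(np.float32)

    def step(self, action):
        moves = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}
        if action in moves:
            self.pos = np.clip(self.pos + moves[action], 0, self.grid)
        elif action == 4:
            self.alt = min(self.alt + 0.1, 1.0)
        else:
            self.alt = max(self.alt - 0.1, 0.1)
        self.energy -= 1                    # every action costs one unit of energy
        # Toy air-to-ground model: higher altitude widens coverage but weakens
        # the channel, so the best data rate sits in a middle altitude band.
        radius = 1.0 + 3.0 * self.alt
        rate = max(0.0, 1.0 - abs(self.alt - 0.4))
        dists = np.linalg.norm(self.users - self.pos, axis=1)
        newly = (dists <= radius) & (self.collected < 1.0)
        reward = rate * newly.sum()         # reward = newly collected data
        self.collected[newly] = 1.0
        terminated = bool(self.collected.all())
        truncated = self.energy <= 0        # energy budget exhausted
        return self._obs(), float(reward), terminated, truncated, {}

model = PPO("MlpPolicy", UAVDataCollectionEnv(), verbose=0)
model.learn(total_timesteps=20_000)  # learns a joint trajectory/altitude policy
```

Because both the horizontal moves and the altitude changes sit in one action space, PPO optimizes the trajectory and the flying height jointly, which mirrors the joint path-planning and altitude-optimization scheme described above.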
