Abstract

The unmanned aerial vehicle (UAV) is one of the preferred tools for coverage detection missions because of its maneuverability and flexibility. However, it is challenging for a UAV to plan a flight path on its own in a complex geometric environment. This paper presents a UAV intelligent navigation method based on deep reinforcement learning (DRL). We propose using geographic information systems (GIS) as the DRL training environment to overcome the inconsistency between the training environment and the test environment, and we store the flight path in the form of an image. The combination of a knowledge-based Monte Carlo tree search method and a local search method not only avoids becoming trapped in local search, but also ensures that the optimal search direction is learned under limited computing power. Experiments show that the trained UAV can find an excellent flight path through intelligent navigation and is able to make effective flight decisions in a complex geometric environment.
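
As a purely illustrative aside, one way the "flight path stored as an image" idea could be realized is to rasterize the visited positions into a 2-D coverage map that a convolutional DRL policy can consume as an image channel. The sketch below is not taken from the paper; the grid resolution, coordinate convention, and example waypoints are all assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): rasterize a
# sequence of (x, y) waypoints in the unit square into a binary image so
# that coverage history can be fed to an image-based policy network.
import numpy as np

GRID_SIZE = 64  # hypothetical resolution of the coverage map


def path_to_image(waypoints, grid_size=GRID_SIZE):
    """Convert (x, y) waypoints in [0, 1)^2 into a binary visited-cell image."""
    img = np.zeros((grid_size, grid_size), dtype=np.float32)
    for x, y in waypoints:
        col = min(int(x * grid_size), grid_size - 1)
        row = min(int(y * grid_size), grid_size - 1)
        img[row, col] = 1.0  # mark the cell as visited
    return img


if __name__ == "__main__":
    # A short diagonal flight path used purely as an illustration.
    path = [(t / 10.0, t / 10.0) for t in range(10)]
    coverage = path_to_image(path)
    print("visited cells:", int(coverage.sum()))  # -> 10
```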
