Abstract
Aiming at intelligent environment perception and obstacle avoidance for unmanned aerial vehicles (UAVs), this paper proposes a UAV visual flight control method based on deep reinforcement learning. The method incorporates a Gated Recurrent Unit (GRU) into the UAV flight control decision network and trains the network with Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm. The gating structure of the GRU is used to memorize historical information and to learn how the UAV's environment evolves from time-series data comprising obstacle image information and UAV position and velocity, thereby realizing dynamic perception of obstacles. The basic framework and training method of the network are introduced, and the generalization ability of the network is tested. Experimental results show that the proposed method achieves good generalization ability and adapts well to the environment.
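A minimal sketch of the kind of network the abstract describes: a GRU-based actor that consumes a time series of observations (obstacle image features, UAV position and velocity) and outputs a continuous control action suitable for DDPG training. The input dimension, hidden size, action dimension, and layer widths are illustrative assumptions, not values given by the authors.

```python
import torch
import torch.nn as nn

class GRUActor(nn.Module):
    """Sketch of a GRU-based flight-control policy network.

    The GRU's gates retain historical information across the observation
    sequence; a small feed-forward head maps the latest hidden state to
    a continuous action, as required by DDPG.
    """
    def __init__(self, obs_dim=64, hidden_dim=128, action_dim=4):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
            nn.Tanh(),  # continuous actions scaled to [-1, 1]
        )

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) -- e.g. concatenated obstacle
        # image features, UAV position, and velocity at each time step.
        out, hidden = self.gru(obs_seq, hidden)
        action = self.head(out[:, -1])  # act on the most recent time step
        return action, hidden

# Example: a batch of 8 observation sequences, each 10 time steps long.
actor = GRUActor()
obs = torch.randn(8, 10, 64)
action, h = actor(obs)
print(action.shape)  # torch.Size([8, 4])
```

In a full DDPG setup this actor would be paired with a critic network and trained from replayed trajectories; those components are omitted here since the abstract does not specify them.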