Abstract

Autonomous UAVs (Unmanned Aerial Vehicles) are ideal platforms for navigation in complex and remote environments, supporting tasks such as inspection, mapping, monitoring, and rescue. UAV control mechanisms tend to be designed for specific tasks; new tasks often require new controllers. Previous research has shown RL (Reinforcement Learning) to be a feasible solution, autonomously learning a general control scheme without explicit engineering knowledge. However, a major drawback of RL has been the large number of iterative training sequences it requires, i.e., trial-and-error exploration by the robot. The main contribution of this paper is a rapid model-based RL method that combines a PID (Proportional, Integral, and Derivative) control approach with an RL algorithm in a hybrid manner. PID control is one of the most practical control methods and has been used for a century by both academia and industry, since it reduces dynamic-model demands and costly gain-tuning effort. This hybrid RL method exploits the synergy between learning convergence speed and control performance, which I evaluate in a path-planning simulation for a nano-UAV.
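The abstract does not give the hybrid method's details, but the PID component it builds on can be illustrated concretely. The sketch below is a minimal discrete-time PID controller driving a hypothetical first-order plant toward a setpoint; the gains, time step, and plant dynamics are illustrative assumptions, not values from the paper.

```python
class PIDController:
    """Discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (integral term)
        self.prev_error = None   # previous error (for derivative term)

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (
            (error - self.prev_error) / self.dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Illustrative closed loop: first-order plant x' = u - x (an assumption,
# not the nano-UAV model from the paper), regulated toward setpoint 1.0.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
state = 0.0
for _ in range(5000):              # 50 s of simulated time
    u = pid.update(1.0, state)
    state += (u - state) * 0.01    # Euler step of the plant dynamics
```

In a hybrid PID+RL scheme of the kind the abstract describes, such a controller would supply a reasonable baseline action while the RL component adapts it, so the agent does not have to discover stabilizing behavior purely by trial and error.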
