Abstract
Research on unmanned aerial vehicles (UAVs) has increased significantly in recent years. Because UAVs operate without an onboard pilot, they must possess autonomous flight capabilities so that they can complete missions without continuous control by a human operator on the ground. Previous studies have mainly relied on rule-based methods, which require specialized personnel to author the rules. Reinforcement learning has also been applied to UAV autonomous flight; however, prior work has not used six-degree-of-freedom (6-DOF) environments and lacks realism, making complex tasks difficult to perform. This study proposes an efficient learning method that connects two different maneuvers through modular learning for autonomous UAV flight. The proposed method divides a complex task into simpler sub-tasks, learns each individually, and then connects them, achieving faster learning by transferring information from one module to the next. In addition, the concept of curriculum learning was applied: the difficulty of each sub-task was increased gradually, which strengthened learning stability. In conclusion, modular learning and curriculum learning were combined to demonstrate that UAVs can effectively perform complex tasks in a realistic 6-DOF environment.
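The core idea summarized above (train modules on sub-tasks ordered from easy to hard, then connect modules by transferring one module's learned parameters to the next) can be illustrated with a minimal sketch. This is not the paper's implementation: the toy two-parameter "controllers", the hill-climbing update standing in for reinforcement learning, and all function names (`train_module`, `cost`) are illustrative assumptions.

```python
import random

# Hedged sketch, not the paper's method: toy 2-parameter "controllers" stand in
# for UAV maneuver policies, and simple hill climbing stands in for RL training.

def cost(params, target):
    """Squared distance between controller parameters and the task optimum."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

def train_module(curriculum, init_params=None, episodes=200, seed=0):
    """Train one module through a curriculum of increasing task difficulty."""
    rng = random.Random(seed)
    params = list(init_params) if init_params is not None else [0.0, 0.0]
    for difficulty in curriculum:            # curriculum: easy tasks first
        target = [difficulty, -difficulty]   # toy optimum shifts with difficulty
        for _ in range(episodes):
            candidate = [p + rng.gauss(0.0, 0.1) for p in params]
            if cost(candidate, target) < cost(params, target):
                params = candidate           # keep improvements only
    return params

# Module 1: learn a first maneuver (a "takeoff"-like toy task).
takeoff = train_module(curriculum=[0.2, 0.5, 1.0])

# Module 2: a second maneuver, initialized from module 1's parameters,
# so learned information is transferred when the modules are connected.
cruise = train_module(curriculum=[0.4, 0.7, 1.0], init_params=takeoff, seed=1)

print(cost(takeoff, [1.0, -1.0]), cost(cruise, [1.0, -1.0]))
```

Warm-starting the second module from the first is the transfer step; the per-module difficulty schedule is the curriculum step. In the paper's setting, each module would instead be an RL policy trained in a 6-DOF flight simulator.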
Published in: Drones