Abstract

Research on unmanned aerial vehicles (UAVs) has increased significantly in recent years. Because UAVs operate without an onboard pilot, they must possess autonomous flight capabilities so that they can be controlled without continuous input from a human operator on the ground. Previous studies have mainly focused on rule-based methods, which require specialized personnel to craft the rules. Reinforcement learning has also been applied to UAV autonomous flight; however, prior work has not used six-degree-of-freedom (6-DOF) environments and lacks realistic application, making complex tasks difficult to perform. This study proposes a method for efficient learning that connects two different maneuvering methods using modular learning for autonomous UAV flight. The proposed method divides a complex task into simpler subtasks, learns them individually, and then connects them, achieving faster learning by transferring information from one module to another. Additionally, the concept of curriculum learning was applied: the difficulty level of each individual task was gradually increased, which strengthened learning stability. In conclusion, the combination of modular learning and curriculum learning demonstrates that UAVs can effectively perform complex tasks in a realistic 6-DOF environment.
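The abstract's training scheme can be illustrated with a minimal sketch. The class names, thresholds, and placeholder training loop below are assumptions for illustration, not the paper's implementation: a curriculum scheduler raises task difficulty once the agent's recent success rate clears a threshold, and separately trained modules would each run under such a scheduler before being connected.

```python
import random


class CurriculumScheduler:
    """Illustrative sketch (not the paper's code): advance to the next
    difficulty level once the recent success rate clears a threshold."""

    def __init__(self, levels=5, threshold=0.8, window=20):
        self.level = 0            # current difficulty level
        self.levels = levels      # total number of levels
        self.threshold = threshold
        self.window = window      # episodes used to estimate success rate
        self.results = []

    def record(self, success: bool):
        """Log one episode outcome; promote difficulty when warranted."""
        self.results.append(success)
        recent = self.results[-self.window:]
        if (len(recent) == self.window
                and sum(recent) / self.window >= self.threshold
                and self.level < self.levels - 1):
            self.level += 1
            self.results = []  # reset statistics for the new level


def train_module(scheduler, episodes, success_prob):
    """Placeholder for training one maneuvering module: each episode
    succeeds with a fixed probability; a real system would run an RL
    update here and feed the true outcome to the scheduler."""
    for _ in range(episodes):
        scheduler.record(random.random() < success_prob)
    return scheduler.level
```

In a modular setup, each maneuvering skill would be trained this way in isolation, and the connection step would transfer what one module has learned (e.g., its policy parameters) to initialize or hand off to the next.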
