Abstract

Artificial intelligence (AI) has been applied across aerospace research to create intelligent systems. In particular, an unmanned aerial vehicle (UAV), commonly known as a drone, can be controlled by AI methods such as deep reinforcement learning (DRL) for various purposes. Drones equipped with DRL become more intelligent and can eventually operate fully autonomously. In this paper, a DRL method supported by a real-time object detection model is proposed to detect and catch a drone. The results are analyzed by comparing the time to catch the target drone, in seconds, among the DRL method, a human pilot, and a baseline algorithm that directs the drone toward the target position without any AI, navigation, or guidance method. The main objective is to catch a drone in an environment as fast as possible without crashing into any obstacles. In the DRL method, the agent is a quadcopter drone that is rewarded at each time step by the environment provided by the AirSim flight simulator. The drone is trained to catch the target drone using a DRL model based on the deep Q-network (DQN) algorithm. After training, tests were conducted with the DRL agent and human pilots catching both stationary and non-stationary target drones. The training and test results show that the agent learns to catch both stationary and non-stationary target drones while avoiding collisions with obstacles in the environment, with a minimum success rate of 94%. The DRL agent also achieves a shorter time to catch the target drone than the human pilots, who struggle to control the drone with a remote controller when pursuing the target in simulation, whereas the agent with the DRL model rarely misses the target.
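
The abstract describes an AirSim environment in which the agent drone is rewarded at each time step and trained with a DQN. Below is a minimal sketch of how such an interaction loop might look; it is not the authors' code. The action set, network architecture, reward values, `catch_radius`, and the helper functions `get_state`, `step`, and `select_action` are illustrative assumptions, while the AirSim client calls (`getMultirotorState`, `moveByVelocityAsync`, `simGetCollisionInfo`) are part of the standard AirSim Python API.

```python
# Sketch only: an assumed AirSim step/reward loop of the kind a DQN agent could
# be trained on. All constants and reward values are illustrative, not the paper's.
import random

import airsim
import numpy as np
import torch
import torch.nn as nn

# Hypothetical discrete action set: velocity commands (vx, vy, vz) in m/s.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, -1), (0, 0, 1)]


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state vector to action values."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def get_state(client: airsim.MultirotorClient, target: np.ndarray) -> np.ndarray:
    """Assumed state: position of the target drone relative to the agent."""
    pos = client.getMultirotorState().kinematics_estimated.position
    agent = np.array([pos.x_val, pos.y_val, pos.z_val], dtype=np.float32)
    return target - agent


def step(client, action_idx: int, target: np.ndarray, catch_radius: float = 1.0):
    """Apply one velocity command and return (next_state, reward, done)."""
    vx, vy, vz = ACTIONS[action_idx]
    client.moveByVelocityAsync(vx, vy, vz, duration=0.5).join()

    next_state = get_state(client, target)
    dist = float(np.linalg.norm(next_state))

    if client.simGetCollisionInfo().has_collided:
        return next_state, -100.0, True      # crash penalty (assumed value)
    if dist < catch_radius:
        return next_state, +100.0, True      # target caught (assumed value)
    return next_state, -0.1 * dist, False    # per-step shaped reward (assumed)


def select_action(qnet: QNet, state: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        q = qnet(torch.as_tensor(state).unsqueeze(0))
    return int(q.argmax(dim=1).item())
```

In a full DQN setup, transitions returned by `step` would be stored in a replay buffer and used to update `QNet` against a periodically synchronized target network; those training details are omitted here.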
