Abstract

Accurate and robust drone detection is an important and challenging task. Previous research, whether based on appearance or motion features, has not yet provided a satisfactory solution, especially against complex backgrounds. To this end, the present work proposes a motion-based method termed the Multi-Scale Space Kinematic detection method (MUSAK). It fully leverages motion patterns by extracting 3D, pseudo-3D, and 2D kinematic parameters at three scale spaces according to keypoint quality, and builds three Gated Recurrent Unit (GRU)-based detection branches for drone recognition. MUSAK is evaluated on a hybrid dataset named the multiscale UAV dataset (MUD), consisting of public datasets and self-collected data with motion labels. The experimental results show that MUSAK improves performance by a large margin, a 95% increase in average precision (AP), compared with previous state-of-the-art (SOTA) motion-based methods; a hybrid variant of MUSAK, which integrates the appearance-based Faster Region-based Convolutional Neural Network (Faster R-CNN), achieves new SOTA performance on AP metrics (AP, AP_M, and AP_S).
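The core ideas summarized above, recurrent (GRU-based) processing of kinematic sequences and the routing of targets to one of three branches by keypoint quality, can be illustrated with a minimal sketch. Everything below is a toy illustration under stated assumptions: the scalar-weight GRU cell, the function names, and the keypoint-count thresholds are hypothetical and are not taken from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MinimalGRUCell:
    """Toy single-feature GRU cell with scalar weights (illustrative only;
    the paper's branches would use learned, multi-dimensional GRUs)."""

    def __init__(self, wz=0.5, wr=0.5, wh=0.5, uz=0.5, ur=0.5, uh=0.5):
        # Input weights (w*) and recurrent weights (u*) for the three gates.
        self.wz, self.wr, self.wh = wz, wr, wh
        self.uz, self.ur, self.uh = uz, ur, uh

    def step(self, x, h):
        z = sigmoid(self.wz * x + self.uz * h)               # update gate
        r = sigmoid(self.wr * x + self.ur * h)               # reset gate
        h_cand = math.tanh(self.wh * x + self.uh * r * h)    # candidate state
        return (1.0 - z) * h + z * h_cand                    # new hidden state

def route_by_keypoint_quality(num_keypoints):
    """Hypothetical routing rule: choose a kinematic branch by how many
    reliable keypoints a target provides. Thresholds are made up here."""
    if num_keypoints >= 8:
        return "3D"
    if num_keypoints >= 4:
        return "pseudo-3D"
    return "2D"

if __name__ == "__main__":
    # Run a short kinematic-parameter sequence through the toy GRU cell.
    cell = MinimalGRUCell()
    h = 0.0
    for x in [0.2, -0.5, 1.0, 0.3]:
        h = cell.step(x, h)
    print("final hidden state:", h)
    print("branch for 10 keypoints:", route_by_keypoint_quality(10))
```

The hidden state stays bounded in (-1, 1) because each step is a convex combination of the previous state and a tanh candidate; in a full detector, the final hidden state of each branch would feed a classifier head that outputs a drone/non-drone score.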
