Unmanned Aerial Vehicles (UAVs) and the growing variety of their applications are rising in popularity. The increasing number of UAVs emphasizes the importance of drone reliability and robustness, creating a need for an efficient self-observing sensing mechanism that detects anomalies in drone behavior in real time. Previous works proposed prediction models from control theory, but these are inherently complex and hard to implement, whereas Deep Learning solutions offer great utility. In this paper, we propose a real-time framework that detects anomalies in drones by analyzing the sound they emit. For this purpose, we construct a hybrid Deep Learning model that combines a Transformer with a Convolutional Neural Network inspired by the well-known VGG architecture. Our approach is evaluated on a dataset collected in real time from a single microphone mounted on a micro drone. It achieves an F1-score of 88.4% in detecting anomalies and outperforms the VGG-16 architecture, while reducing the parameter count from VGG-16's 138M to a shrunk version with only 3.6M parameters. Despite the smaller network, our real-time approach yields high anomaly-detection accuracy, with an average inference time of 0.2 seconds per second of audio. Moreover, with an earphone weighing less than 100 grams mounted on top of the UAV, our method proves beneficial even under extreme conditions, such as a micro-sized dataset comprising only three hours of flight recordings. The presented self-observing method can be implemented by simply adding a microphone to the drone and either transmitting the captured audio to the remote controller for analysis or performing the analysis onboard using a dedicated microcontroller.
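For illustration, the sketch below shows one plausible way to realize the kind of hybrid described in the abstract: a compact, VGG-style convolutional front end over log-mel spectrogram frames, followed by a Transformer encoder and a binary normal-vs-anomalous head. All layer widths, depths, the input resolution, and the module names are assumptions for the sketch and are not the exact 3.6M-parameter configuration used in the paper.

```python
# Illustrative sketch only: a narrow VGG-style CNN front end followed by a
# Transformer encoder for binary (normal vs. anomalous) drone-sound
# classification. Layer sizes and input shape are assumptions, not the
# paper's exact architecture.
import torch
import torch.nn as nn


class HybridAudioAnomalyNet(nn.Module):
    def __init__(self, n_mels: int = 64, d_model: int = 128, n_heads: int = 4,
                 n_layers: int = 2, n_classes: int = 2):
        super().__init__()
        # VGG-inspired blocks: stacks of 3x3 convolutions followed by pooling,
        # but far narrower than VGG-16 to keep the parameter count small.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Project the flattened CNN feature maps to the Transformer dimension.
        self.proj = nn.Linear(64 * (n_mels // 4), d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) log-mel spectrogram.
        x = self.cnn(spec)                      # (batch, 64, n_mels/4, time/4)
        x = x.permute(0, 3, 1, 2).flatten(2)    # (batch, time/4, 64 * n_mels/4)
        x = self.proj(x)                        # (batch, time/4, d_model)
        x = self.transformer(x)                 # self-attention over time steps
        return self.head(x.mean(dim=1))         # pool over time, classify


if __name__ == "__main__":
    model = HybridAudioAnomalyNet()
    dummy = torch.randn(2, 1, 64, 128)                   # two spectrogram clips
    print(model(dummy).shape)                             # torch.Size([2, 2])
    print(sum(p.numel() for p in model.parameters()))     # total parameter count
```

In this arrangement the convolutional blocks extract local time-frequency patterns cheaply, while the Transformer models longer-range temporal context before a single pooled classification decision, which keeps the network small enough for near-real-time inference.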