Abstract

High mobility and the ability to gather data over large terrains make Unmanned Aerial Vehicles (UAVs) an excellent platform for carrying visual or acoustic sensors. One recently emerging application of UAVs is search and rescue, in which drones are used to localize people in distress. A common approach to determining the target position relies on visual data recorded by cameras. However, under limited visibility, such as in the presence of smoke, at night, or when a person is trapped under debris, acoustic information can be exploited to localize people in distress. Solutions based on acoustic signals gathered by a drone-mounted microphone array are a promising alternative to vision-based methods and are currently being widely examined for UAV applications. The main obstacles to acoustic source localization from a drone are the high ego-noise and the wind produced by the propellers. This paper investigates the statistical properties of the drone's ego-noise and proposes an acoustic source localization algorithm that exploits the sparsity of sound sources in the time-frequency domain. A comparison of the results obtained by the proposed method and by commonly used approaches clearly shows the benefits of the proposed processing.
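The general idea of exploiting time-frequency sparsity can be illustrated with a minimal sketch. This is a generic illustration, not the paper's algorithm: it estimates the time difference of arrival (TDOA) between two microphone channels with a GCC-PHAT-style cross-correlation, restricted to the loudest time-frequency bins, under the assumption that the target source dominates the ego-noise in only a sparse subset of bins. The function names, window length, and the `keep` fraction are all illustrative choices.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Short-time Fourier transform with a Hann window.
    Returns an array of shape (frames, frequency bins)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def sparse_tdoa(x1, x2, fs, keep=0.2):
    """Estimate the TDOA between two channels using only the top `keep`
    fraction of time-frequency bins ranked by cross-power (sparsity
    assumption: the source stands above the noise in a few TF bins).
    A positive result means channel 2 is delayed relative to channel 1."""
    X1, X2 = stft(x1), stft(x2)
    power = np.abs(X1) * np.abs(X2)
    # TF mask: keep only the strongest bins, discard the rest.
    mask = power >= np.quantile(power, 1.0 - keep)
    # GCC-PHAT style: phase-normalized cross-spectrum on masked bins.
    cross = (X2 * np.conj(X1)) * mask
    cross = cross / (np.abs(cross) + 1e-12)
    cc = np.fft.irfft(cross.sum(axis=0))
    n = len(cc)
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:          # unwrap circular lags to signed delays
        lag -= n
    return lag / fs
```

On synthetic data (a broadband source delayed by a few samples on the second channel, with independent noise on both), the masked cross-correlation recovers the integer-sample delay; the mask is what discards TF bins dominated by noise before the correlation is accumulated.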
