Abstract

Disaster robotics, in particular unmanned aerial vehicles (hereinafter, drones), are expected to improve the promptness and effectiveness of search and rescue missions. Although vision is the most common sensing modality for drones, its drawbacks, such as poor illumination conditions and occlusion, should be compensated for by other modalities. This paper focuses on sound source localization and position estimation by drone audition. The most critical issue in robot audition with a microphone array is the suppression of ego-noise caused by the rotors and the airflow around the drone. Another issue is the handling of multiple sound sources in addition to ego-noise. If multiple sound sources actually or apparently cross, due to movement of the sources and/or the drone, tracking may suffer from uncertainty in data association. In addition, the user interface is also critical to the mission, but it has not been discussed to date. This paper focuses on the design and implementation of real-time visualization of sound source positions for search and rescue missions. The design follows the guidelines proposed by Murphy and Tadokoro. As a proof of concept, public demonstrations at the ImPACT Tough Robotics Challenge (TRC) are reported.
