Abstract

An emerging application of unmanned aerial vehicles (UAVs) is search and rescue operations, where drones are used to detect emergency events such as people screaming, explosions, or gunshots. Such a service can provide critical support and enable a faster response from first responders in an area of interest. UAVs equipped with acoustic sensors form an advantageous surveillance system compared with visual sensors, which are limited by lighting conditions and obstacles in the field of view. In this paper, an acoustic detection system (ADS) capable of identifying such scenarios is presented, built on a ReSpeaker 4-Mic Array and a Raspberry Pi 4B. For this purpose, a custom two-dimensional convolutional neural network (CNN) based on the YAMNet architecture has been implemented. The model was trained on public audio datasets covering target classes related to search and rescue operations, and an additional class for overlapping events was created using mixup augmentation. The final system was tuned to run under real-time conditions, achieving an overall accuracy of 70.13% indoors and 70.52% outdoors, revealing the potential of the solution.
