Abstract
In a disaster situation, rapid detection of victims is critical for saving lives. However, detecting a target sound is difficult because of unpredictable noise and interference, and acoustic data from real disaster scenes is hard to collect. Therefore, this study proposes a target detection scheme that copes with unseen environmental noise using a binary classification model trained only on clean target signals and additive white Gaussian noise (AWGN). The Mel spectrogram of the signal is used as the input feature, and a convolutional neural network (CNN) serves as the classifier. Two acoustic sensors are used to adaptively remove environmental noise, and the CNN detects the target signal. First, the CNN is trained on target signals and AWGN data. At inference time, environmental noise is removed by an adaptive filter that compensates for the gain difference and delay between the two sensors; the denoised signal is then converted into a Mel spectrogram and passed to the CNN for detection. Simulation results show that the detection rate at 0 dB SNR improves from 50% to 99%. Disaster scenes contain many sounds that are difficult to predict and therefore cannot be included in training, so these results indicate that target detection with adaptive-filter-based noise cancellation is effective in disaster situations.
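The abstract names the pipeline stages (two-sensor adaptive noise cancellation, Mel spectrogram features, a CNN binary classifier) but not the filter type, filter order, step size, or network layout. The Python sketch below is a minimal illustration under assumed parameters: it uses a normalized LMS filter as one common choice of adaptive filter, and a small assumed CNN architecture; all tap counts, layer sizes, and function names are hypothetical, not taken from the paper.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

def nlms_cancel(primary, reference, taps=64, mu=0.5, eps=1e-8):
    """Two-sensor adaptive noise cancellation (NLMS is an assumed variant).
    primary   : sensor capturing target + environmental noise
    reference : sensor capturing mostly noise, with different gain/delay
    The filter learns the gain/delay mapping from reference to primary,
    so the residual e[n] approximates the noise-suppressed target."""
    w = np.zeros(taps)            # adaptive filter weights
    buf = np.zeros(taps)          # sliding window over the reference channel
    out = np.empty(len(primary))
    for n in range(len(primary)):
        buf[1:] = buf[:-1]
        buf[0] = reference[n]
        noise_hat = w @ buf                      # estimated noise in primary
        e = primary[n] - noise_hat               # residual = target estimate
        w += (mu / (eps + buf @ buf)) * e * buf  # normalized LMS update
        out[n] = e
    return out

class DetectorCNN(nn.Module):
    """Small binary CNN over log-Mel spectrograms (layout is assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),     # logits: [no target, target]
        )

    def forward(self, x):
        return self.net(x)

def detect(primary, reference, sr, model):
    """Full inference path from the abstract: denoise -> log-Mel -> CNN."""
    denoised = nlms_cancel(primary, reference)
    mel = librosa.feature.melspectrogram(y=denoised, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    x = torch.tensor(log_mel, dtype=torch.float32)[None, None]  # (1,1,mel,T)
    with torch.no_grad():
        return model(x).argmax(dim=1).item()    # 1 = target detected
```

On one plausible reading of the abstract, training would use only clean target clips and AWGN-mixed clips as the two classes, while the adaptive filter is applied only at inference time to strip environmental noise before the spectrogram stage.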