Abstract

Sound event detection (SED) plays an important role in understanding the sounds that occur in different environments. Recent studies on standardized datasets have shown the scientific community's growing interest in the SED problem; however, they have not paid sufficient attention to the detection of artificial versus natural sounds. To address this gap, the present article combines different features for the detection of machine-generated and natural sounds. We trained and compared a Stacked Convolutional Recurrent Neural Network (S-CRNN), a Convolutional Recurrent Neural Network (CRNN), and an Artificial Neural Network (ANN) classifier on the DCASE 2017 Task-3 dataset. Relative spectral perceptual linear prediction (RASTA-PLP) and Mel-frequency cepstral coefficient (MFCC) features are used as input to the proposed multi-model, and the performance of monaural and binaural inputs to the classifiers is compared. In the proposed S-CRNN model, the sound events in the dataset are classified into two sub-classes. Compared with the baseline model, the PLP-based ANN classifier improves the error rate (ER) for individual sound events, e.g., to 0.23 for heavy vehicles and 0.32 for people walking, with minor gains for the other events. The proposed CRNN outperforms both the baseline and the proposed ANN model. Moreover, in cross-validation trials, the evaluation-stage results show a significant improvement over the best-performing DCASE 2017 Task-3 system, reducing the ER to 0.11 and increasing the F1-score by 10% on the evaluation dataset. Erosion and dilation were applied during post-processing.
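
The pipeline summarized above, frame-wise features fed to a classifier followed by morphological post-processing, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the file path, frame parameters, and the 3-frame structuring element are assumed, the erosion-then-dilation (opening) order is one plausible reading of the abstract, and RASTA-PLP extraction is omitted since librosa does not provide it and a dedicated toolkit would be needed.

```python
# Hedged sketch of SED feature extraction and post-processing.
# All parameter choices below are illustrative assumptions.
import numpy as np
import librosa
from scipy.ndimage import binary_erosion, binary_dilation

def extract_mfcc(path, sr=44100, n_mfcc=40, n_fft=2048,
                 hop_length=1024, binaural=True):
    """Return an (n_frames, n_features) MFCC matrix.

    With binaural=True the two channels are processed separately and
    concatenated, mirroring the monaural-vs-binaural comparison in the
    abstract. 'path' and all frame parameters are hypothetical.
    """
    y, sr = librosa.load(path, sr=sr, mono=not binaural)
    channels = y if y.ndim == 2 else y[np.newaxis, :]
    feats = [librosa.feature.mfcc(y=ch, sr=sr, n_mfcc=n_mfcc,
                                  n_fft=n_fft, hop_length=hop_length)
             for ch in channels]
    return np.concatenate(feats, axis=0).T  # (frames, features)

def smooth_predictions(active, iterations=1):
    """Morphological opening of a boolean frame-activity vector:
    erosion removes isolated false-positive frames, and dilation
    restores the duration of the surviving events."""
    structure = np.ones(3, dtype=bool)  # 3-frame neighbourhood (assumed)
    opened = binary_erosion(active, structure=structure,
                            iterations=iterations)
    return binary_dilation(opened, structure=structure,
                           iterations=iterations)
```

For example, a spurious one-frame detection such as `[0, 1, 0, 1, 1, 1, 1, 0]` is cleaned to `[0, 0, 0, 1, 1, 1, 1, 0]` by `smooth_predictions`, which is the kind of correction erosion and dilation provide before the ER and F1-score are computed.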
