Abstract

Polyphonic sound event detection (SED) is the task of detecting the time stamps and the classes of sound events that occur during a recording. Real-life sound events overlap in recordings, and their durations vary dramatically, making them even harder to recognize. In this paper, we propose Convolutional Recurrent Neural Networks (CRNNs) to extract hidden-state feature representations; a self-attention mechanism using a symmetric score function is then introduced to capture long-range dependencies in the features that the CRNNs extract. Furthermore, we propose memory-controlled self-attention to explicitly compute the relations between time steps in the audio representation embedding, along with a strategy for adaptively controlling the attention width. Moreover, we apply semi-supervised learning, namely the mean teacher method, to exploit unlabeled audio data. The proposed methods all performed well on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 Sound Event Detection in Real Life Audio (task 3) test set and the DCASE 2021 Sound Event Detection and Separation in Domestic Environments (task 4) test set. In DCASE 2017 task 3, our model surpassed the challenge's winning system's F1-score by 6.8%. We show that the proposed adaptive memory-controlled model reaches the same performance level as a fixed-attention-width model. Experimental results indicate that the proposed attention mechanism improves sound event detection. In DCASE 2021 task 4, we investigated various pooling strategies in two scenarios. In addition, we found that in weakly labeled semi-supervised sound event detection, building an attention layer on top of the CRNN is redundant. This conclusion may also apply to other multiple-instance learning problems.
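The paper itself gives no code. As a rough illustration of the memory-controlled (width-restricted) self-attention idea described above, the NumPy sketch below masks the attention scores so that each time step only attends within a window of `w` steps around itself. The scaled dot product used here is merely a stand-in for the paper's symmetric score function, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def windowed_self_attention(x, w):
    """Self-attention over a sequence x of shape (T, d), restricted
    to +/- w time steps around each query (the attention width).
    A scaled dot product (symmetric in its two arguments) stands in
    for the paper's symmetric score function."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)             # (T, T) symmetric score matrix
    # Memory control: mask out positions farther than w steps away.
    idx = np.arange(T)
    mask = np.abs(idx[:, None] - idx[None, :]) > w
    scores[mask] = -np.inf
    # Row-wise softmax over the allowed window only.
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                        # (T, d) context vectors

x = np.random.default_rng(0).standard_normal((6, 4))
ctx = windowed_self_attention(x, w=1)
print(ctx.shape)  # (6, 4)
```

With `w = 0` each frame attends only to itself and the output equals the input; growing `w` trades locality for longer-range context, which is the trade-off the adaptive-width strategy targets.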

Highlights

  • Polyphonic sound event detection (SED) is the task of detecting the time stamps and the class of sound event that occurred during a recording

  • Since there are no general rules for choosing the attention width, we propose adaptively controlling it to overcome the limitation of a fixed attention width

  • We first heuristically chose a set of fixed-length attention widths for both the DCASE 2017 supervised model and the DCASE 2021 semi-supervised model


Introduction

Polyphonic sound event detection (SED) is the task of detecting the time stamps and the classes of sound events that occur during a recording. Unlike the typical classification problem that assigns an audio example to one or more classes, e.g., sound scene classification [6], SED requires detecting both the time stamps and the class(es) of the sound events in a recording. In real-life recordings, multiple sound events often overlap; recognizing such overlapping sound events is referred to as polyphonic sound event detection [7]
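For the semi-supervised part, the mean teacher method mentioned in the abstract keeps a teacher model whose weights are an exponential moving average (EMA) of the student's, and unlabeled clips contribute a consistency loss between the two models' predictions. A minimal NumPy sketch of those two ingredients follows; the function names, the EMA decay value, and the toy frame-level probabilities are illustrative assumptions, not the paper's training code.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.999):
    """Mean teacher: after each training step, the teacher weights
    become an exponential moving average of the student weights."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Mean squared error between student and teacher predictions
    on the same (possibly unlabeled) audio clip."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy example: frame-level event probabilities from both models
# for one clip of 10 frames and 3 event classes.
rng = np.random.default_rng(0)
student = rng.uniform(size=(10, 3))
teacher = student + 0.01 * rng.standard_normal((10, 3))
print(consistency_loss(student, teacher))
```

The consistency term needs no labels, which is what lets the large unlabeled portion of the DCASE 2021 task 4 data contribute to training.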
