Abstract

Deep learning models have proven effective for complex learning tasks. Anomalous sound detection is one such task, for which self-supervised deep architectures have recently emerged. Self-supervised deep models efficiently capture the underlying structure of data. Self-supervised anomalous sound detection attempts to distinguish normal sounds from unseen anomalous sounds. With an appropriate autoencoder, reconstruction-error-based decision making has proven effective for anomaly detection in domains such as computer vision. Auditory-image (spectrogram) representations of sound signals are commonly used in sound event detection. We propose a convolutional long short-term memory (CLSTM) autoencoder-based approach for anomalous sound detection. In this approach, we explore the fusion of spectral and temporal features to model the characteristics of normal sounds in the presence of noise. The proposed approach is evaluated on the MIMII dataset and the DCASE Challenge (2020) Task 2 anomalous sound detection dataset. Experiments with the proposed approach reveal significant improvement over state-of-the-art approaches.
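To make the reconstruction-error idea concrete, the following is a minimal sketch of a ConvLSTM autoencoder trained only on normal spectrogram segments, with the mean squared reconstruction error used as the anomaly score. The segment size, layer widths, and placeholder data below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal ConvLSTM autoencoder sketch for spectrogram-based anomaly scoring.
# Assumes log-mel spectrogram segments of shape (FRAMES, MELS); all
# hyperparameters here are assumed for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, MELS = 32, 64  # assumed segment size (time frames x mel bins)

def build_clstm_autoencoder():
    # ConvLSTM2D expects (time, rows, cols, channels) per sample.
    inp = layers.Input(shape=(FRAMES, MELS, 1, 1))
    x = layers.ConvLSTM2D(16, (3, 1), padding="same",
                          return_sequences=True, activation="relu")(inp)
    x = layers.ConvLSTM2D(8, (3, 1), padding="same",
                          return_sequences=True, activation="relu")(x)   # bottleneck
    x = layers.ConvLSTM2D(16, (3, 1), padding="same",
                          return_sequences=True, activation="relu")(x)
    out = layers.Conv3D(1, (3, 3, 1), padding="same", activation="linear")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on normal sounds only; placeholder random data stands in for real spectrograms.
model = build_clstm_autoencoder()
normal_segments = np.random.rand(128, FRAMES, MELS, 1, 1).astype("float32")
model.fit(normal_segments, normal_segments, epochs=1, batch_size=16, verbose=0)

def anomaly_score(segment):
    # Mean squared reconstruction error of a single segment.
    recon = model.predict(segment[np.newaxis, ...], verbose=0)
    return float(np.mean((segment - recon[0]) ** 2))

# A segment whose score exceeds a threshold fit on held-out normal data is flagged anomalous.
```

In practice the threshold is chosen from the score distribution of held-out normal recordings, since no anomalous examples are available during training.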
