Abstract

Machine learning has experienced strong growth in recent years, driven by increased dataset sizes and computational power and by advances in deep learning methods that can learn to make predictions in highly non-linear problem settings. The problem of automatic environmental sound classification has received considerable attention from the research community in recent years. In this paper, the audio dataset is converted into spectrograms using Digital Signal Processing (DSP). The resulting spectrograms are fed to a Convolutional Neural Network (CNN) for classification of the audio signals. We present a deep convolutional neural network architecture with localized kernels for environmental sound classification. By additionally training the network on deformed data, the aim is for the network to become invariant to these deformations and generalize better to unseen data. We show that the proposed DSP pipeline, in combination with the CNN architecture, yields state-of-the-art performance for environmental sound classification.
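
The abstract does not specify the exact DSP parameters or network architecture, so the following is only a minimal sketch of the described pipeline: convert an audio clip to a log-scaled (mel) spectrogram with standard DSP, apply a simple deformation for augmentation, and classify the spectrogram with a small CNN using localized kernels. The library choices (librosa, PyTorch), mel parameters, layer sizes, and file names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: spectrogram extraction + small CNN classifier; all hyperparameters are assumed.
import librosa
import numpy as np
import torch
import torch.nn as nn

def audio_to_log_mel(path, sr=22050, n_mels=64, n_fft=1024, hop_length=512):
    """Load an audio file and compute a log-scaled mel spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, time_frames)

def deform(y, sr, n_steps=2):
    """Example deformation for augmentation: pitch-shift the waveform."""
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

class SoundCNN(nn.Module):
    """Small CNN with localized (3x3) kernels for spectrogram classification."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, time_frames)
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage:
# spec = audio_to_log_mel("siren.wav")
# logits = SoundCNN(n_classes=10)(torch.tensor(spec)[None, None].float())
```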
