Abstract

Deep neural networks have been widely shown to achieve higher accuracy than traditional machine learning methods in extracting features from data. While convolutional neural networks (CNNs) have shown great success in feature extraction and audio classification, real-world audio is sequential: each scene depends on the scenes that precede it. Moreover, a key drawback of deep learning algorithms is that they require large amounts of training data to perform well. In this paper, a recurrent neural network (RNN) combined with a CNN is proposed to address the first problem, and a Deep Convolutional Generative Adversarial Network (DCGAN) is used for high-quality data augmentation to address the second. This augmentation technique is applied to the UrbanSound8K dataset to improve environmental sound classification. Batch normalization, transfer learning, and three feature representation maps are used to improve model accuracy. The results show that the spectrograms generated by the DCGAN share the salient features of the original training images and improve classification accuracy. Experimental results on the UrbanSound8K dataset demonstrate that the proposed CNN-RNN architecture achieves better performance than state-of-the-art classification models.
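To make the CNN-RNN idea concrete, the following is a minimal NumPy sketch of such a forward pass on a spectrogram: a convolutional stage extracts local time-frequency features, and a recurrent stage scans the resulting feature columns over time before a softmax over the 10 UrbanSound8K classes. This is an illustrative toy with random weights, not the paper's architecture; the filter size, hidden width, and spectrogram dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D correlation of a single-channel spectrogram with kernel k."""
    fh, fw = k.shape
    H, W = x.shape
    out = np.empty((H - fh + 1, W - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + fh, j:j + fw] * k)
    return out

def cnn_rnn_forward(spec, n_classes=10, hidden=16):
    # CNN stage: one 3x3 filter + ReLU (illustrative; real models stack many filters)
    k = rng.standard_normal((3, 3)) * 0.1
    feat = np.maximum(conv2d(spec, k), 0.0)          # shape: (freq', time')
    # RNN stage: each feature column is one timestep of a simple tanh RNN
    Wx = rng.standard_normal((hidden, feat.shape[0])) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for t in range(feat.shape[1]):
        h = np.tanh(Wx @ feat[:, t] + Wh @ h)
    # Classifier head: softmax over the 10 UrbanSound8K classes
    Wo = rng.standard_normal((n_classes, hidden)) * 0.1
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Fake 64-bin, 44-frame log-mel spectrogram standing in for one audio clip
spec = rng.standard_normal((64, 44))
probs = cnn_rnn_forward(spec)
```

In a trained model the weights would be learned (and the DCGAN would supply additional synthetic spectrograms for training), but the data flow — convolution over time-frequency, recurrence over time, softmax over classes — is the same.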
