Abstract

Deep Neural Networks (DNNs) have recently shown high performance on speech classification tasks. In this paper, we argue that the improved accuracy of the Deep Convolutional Neural Network (DCNN) classifier results from its ability to extract discriminative representations that are robust to the different sources of variability in speech signals. In this study, we propose a new algorithm, called ST-DCNN, to classify normal and pathological voices, combining advances in speech feature extraction with deep learning to improve the identification of pathological voices. The proposed approach operates in two steps: first, scattering wavelet features are extracted; then, a DCNN is applied for voice classification. The performance of the proposed system is evaluated in silent and noisy environments at various Signal-to-Noise Ratio (SNR) levels. The results show that the proposed system, combining scattering wavelets and a DCNN, performs best in a silent environment, with a recognition rate of 99.62%.
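
The two-step pipeline described above can be illustrated with a minimal sketch, assuming a Python implementation based on the kymatio library's Scattering1D front end and a small PyTorch 1-D CNN; the layer sizes, scattering parameters (J, Q), and segment length are illustrative assumptions, not the exact ST-DCNN architecture or hyper-parameters of the paper.

```python
# Minimal sketch: scattering-wavelet features + 1-D CNN for binary voice
# classification (normal vs. pathological). The kymatio front end and the
# CNN below are illustrative assumptions, not the paper's exact ST-DCNN.
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

T = 2 ** 14          # samples per fixed-length voice segment (assumed)
J, Q = 8, 12         # scattering scale and wavelets per octave (assumed)

scattering = Scattering1D(J=J, shape=T, Q=Q)

class VoiceDCNN(nn.Module):
    """1-D convolutional classifier operating on scattering coefficients."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, 2)   # normal vs. pathological

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).squeeze(-1)      # (batch, 128)
        return self.classifier(x)             # (batch, 2) class scores

# Example forward pass on a batch of raw waveforms.
waveforms = torch.randn(4, T)                 # placeholder audio batch
coeffs = scattering(waveforms)                # (batch, channels, time)
coeffs = torch.log1p(coeffs)                  # log-compression (common choice)
model = VoiceDCNN(in_channels=coeffs.shape[1])
logits = model(coeffs)
```

In practice, noise robustness at different SNR levels would be assessed by adding noise to the waveforms before the scattering transform and measuring the recognition rate at each SNR.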
