Abstract
Learning-aided methods are popular for designing automatic speech recognition (ASR) systems. The majority of works have used shallow models in combination with mel frequency cepstral coefficients (MFCC) and other features for speech recognition applications. Although these shallow models are effective, incorporating deep features into the mechanism for speech processing applications is necessary to increase efficiency. Despite a considerable amount of work on the design of deep learning topologies and training paradigms in the supervised domain, very few works have concentrated on deep features, which are essential for capturing detailed information from speech. This work focuses on the generation of deep features using a stacked auto-encoder for normal and time-shifted telephonic speech samples in the Assamese language with mood and dialect variations. Experimental results show that the deep features learned by the stacked auto-encoder perform better when it is configured for Assamese speech recognition with mood and dialect variations.
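The deep-feature pipeline the abstract describes can be sketched as greedy layer-wise training of a stacked auto-encoder, where each layer is trained to reconstruct its input and the resulting codes feed the next layer. The sketch below is a minimal illustration, not the paper's exact architecture: the layer sizes, learning rate, and the toy stand-in for MFCC frames are all assumptions.

```python
# Hypothetical sketch of greedy layer-wise stacked auto-encoder training.
# Layer sizes, hyperparameters, and the synthetic "MFCC" data are illustrative
# assumptions, not the configuration used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, epochs=200, lr=0.5):
    """Train one auto-encoder layer; return encoder weights, bias, and losses."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
    b = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # decoder weights
    b2 = np.zeros(n_in)
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        Xr = sigmoid(H @ W2 + b2)     # decode (reconstruct the input)
        err = Xr - X
        losses.append(float(np.mean(err ** 2)))
        # Backpropagation for the squared reconstruction error.
        d2 = err * Xr * (1.0 - Xr)
        d1 = (d2 @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ d2 / len(X)
        b2 -= lr * d2.mean(axis=0)
        W -= lr * X.T @ d1 / len(X)
        b -= lr * d1.mean(axis=0)
    return W, b, losses

# Toy stand-in for MFCC frames: 200 frames x 13 coefficients, scaled to [0, 1].
X = rng.random((200, 13))

# Greedy layer-wise stacking: train layer 1, encode, then train layer 2 on the codes.
W1, b1, loss1 = train_autoencoder(X, n_hidden=8)
H1 = sigmoid(X @ W1 + b1)
W2s, b2s, loss2 = train_autoencoder(H1, n_hidden=4)
deep_features = sigmoid(H1 @ W2s + b2s)

print(deep_features.shape)  # compact deep features for a downstream ASR classifier
```

After layer-wise pre-training, the encoder stack would typically be fine-tuned with labels (e.g. mood or dialect classes); that supervised step is omitted here.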
Published in: International Journal of Information and Communication Technology