Abstract
Automatic speech recognition (ASR) has moved from science-fiction fantasy to daily reality for citizens of technological societies. Some people seek it out, preferring dictation to typing, or benefiting from voice control of aids such as wheelchairs. Others find it embedded in their high-tech gadgetry – in mobile phones and car navigation systems – or cropping up in what would until recently have been human roles, such as telephone booking of cinema tickets. Wherever you may meet it, computer speech recognition is here, and it is here to stay. Most ASR systems are based on hidden Markov models (HMMs) in which Gaussian mixture models represent the output distributions of subphone states. Dynamic information is typically included by appending time-derivatives to the feature vectors, and this approach has been quite successful. However, it makes the false assumption of framewise independence of the augmented feature vectors and ignores the spatial correlations in the parametrised speech signal; this is a shortcoming of HMM-based acoustic modelling for ASR. Rather than modelling individual frames of data, linear dynamic models (LDMs) characterise entire segments of speech. An auto-regressive state evolution through a continuous space yields a Markovian model that captures both the underlying dynamics and the spatial correlations between feature dimensions. LDMs are well suited to modelling smoothly varying, continuous, yet noisy trajectories such as those found in measured articulatory data.
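The generative process behind an LDM can be sketched as follows: a hidden state evolves auto-regressively through a continuous space, and each observed feature vector is a noisy linear projection of that state. This is a minimal simulation only; the matrix values, dimensions, and noise covariances below are illustrative assumptions, not parameters estimated from speech data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2-D hidden state, 3-D observed feature vector,
# a segment of 50 frames.
state_dim, obs_dim, T = 2, 3, 50

# Illustrative LDM parameters (assumptions, not trained values):
F = np.array([[0.95, 0.10],
              [0.00, 0.90]])                    # state-evolution matrix
H = rng.standard_normal((obs_dim, state_dim))   # observation matrix
Q = 0.01 * np.eye(state_dim)                    # state-noise covariance
R = 0.05 * np.eye(obs_dim)                      # observation-noise covariance

x = np.zeros(state_dim)
states, observations = [], []
for t in range(T):
    # Markovian, auto-regressive state evolution through a continuous space
    x = F @ x + rng.multivariate_normal(np.zeros(state_dim), Q)
    # The observation matrix H couples all feature dimensions, so the model
    # represents the spatial correlations a frame-independent HMM-GMM ignores
    y = H @ x + rng.multivariate_normal(np.zeros(obs_dim), R)
    states.append(x)
    observations.append(y)

states = np.array(states)          # shape (50, 2): smooth hidden trajectory
observations = np.array(observations)  # shape (50, 3): noisy feature frames
```

Successive observations are correlated through the shared hidden trajectory, which is what lets an LDM describe a whole segment rather than treating each frame as independent.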