Abstract

The INRS large-vocabulary continuous-speech recognition system employs a two-pass search. First, inexpensive models prune the search space; then a powerful language model and detailed acoustic–phonetic models scrutinize the data. A new fast match with two-phone lookahead and pruning speeds up the search. In language modeling, excluding low-count statistics reduces memory (50% fewer bigrams and 92% fewer trigrams); with Wall Street Journal texts, excluding single-occurrence bigrams and trigrams with counts less than five yields little performance decrease. In acoustic modeling, separate male and female right-context VQ models and a bigram language model are used in the first pass, and right-context continuous models and a trigram language model are used in the second pass. A shared-distribution clustering uses a distortion measure based only on the weights of Gaussian mixtures in the HMM model. Testing the system with a 5000-word vocabulary, the word inclusion rate (i.e., correct word retained in the first pass) is about 99%; word recognition accuracy is about 92.5%. Keyword spotting with new types of fillers retains accuracy with 1.2 false alarms/hour/keyword. [Work supported by NSERC-Canada and FCAR-Quebec.]
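The count-threshold pruning of language-model statistics described above (dropping single-occurrence bigrams and trigrams seen fewer than five times) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, thresholds as parameters, and dictionary representation are assumptions for the example.

```python
def prune_ngrams(bigram_counts, trigram_counts, bigram_min=2, trigram_min=5):
    """Drop low-count n-gram statistics to shrink the language model.

    Mirrors the thresholds stated in the abstract: bigrams must occur
    at least twice (i.e., single-occurrence bigrams are excluded) and
    trigrams at least five times. Inputs map n-gram tuples to counts.
    """
    kept_bigrams = {ng: c for ng, c in bigram_counts.items() if c >= bigram_min}
    kept_trigrams = {ng: c for ng, c in trigram_counts.items() if c >= trigram_min}
    return kept_bigrams, kept_trigrams


# Toy usage: the singleton bigram and the count-4 trigram are pruned.
bigrams = {("wall", "street"): 120, ("street", "wall"): 1}
trigrams = {("wall", "street", "journal"): 90, ("journal", "wall", "street"): 4}
kept_bi, kept_tri = prune_ngrams(bigrams, trigrams)
```

On Wall Street Journal text, the abstract reports that this kind of thresholding removes about 50% of bigrams and 92% of trigrams with little loss in recognition performance.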
