Abstract
We study two key issues in task-independent training, namely the selection of a universal set of subword units and the modeling of the selected units. Since no a priori knowledge about the application vocabulary and syntax is used in collecting the training corpus, and the recognition task changes frequently, the conventional strategy can no longer provide the best performance across many different tasks. We present an approach that uses the complete sets of right- and left-context-dependent units as the basis phone sets. These models are trained with a new criterion that maximizes phone separation between competing models. The proposed phone selection and modeling approach was evaluated across different tasks in American English. Good recognition results were obtained for both context-independent and context-dependent phone models, even on unseen tasks. The same strategy has also been applied to two other languages, Mandarin Chinese and Spanish, with similar success.