Abstract

This paper describes an efficient method for attaining the highest level of recognition performance yet achieved with minimum classification error (MCE) training on a small amount of data. The method combines MCE with vector-field-smoothed Bayesian learning (MAP/VFS), which significantly enhances the training capability of MCE for robust acoustic modeling. Specifically, MCE training is performed starting from an initial model trained with MAP/VFS, and the same data are used in both training stages. For speaker adaptation using 50-word training data, the error reduction rate rises dramatically to 47%, compared with 16.5% when only MCE is used. Of this rate, 39% is attributable to MAP, an additional 4% to VFS, and a further 4% to MCE, showing that MAP/VFS substantially enhances the training capability of MCE.
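As a rough illustration of the two-stage scheme summarized above, the sketch below computes the sigmoid-smoothed misclassification loss that standard MCE training minimizes; in the proposed setup this loss would be optimized starting from a model already adapted with MAP/VFS on the same data. The function name, parameter values, and scores are hypothetical and are not taken from the paper.

```python
import numpy as np

def mce_loss(correct_score, competing_scores, gamma=2.0, eta=10.0):
    """Sigmoid-smoothed MCE loss for one training token (standard Juang-Katagiri form).

    correct_score    -- discriminant (e.g. log-likelihood) of the correct class
    competing_scores -- discriminants of the rival classes
    """
    competing_scores = np.asarray(competing_scores, dtype=float)
    # Soft-max style misclassification measure d: positive when the rivals
    # outscore the correct class, negative when the token is classified correctly.
    rival = (np.logaddexp.reduce(eta * competing_scores)
             - np.log(len(competing_scores))) / eta
    d = -correct_score + rival
    # Smoothed 0/1 loss; its gradient drives the GPD parameter updates in MCE.
    return 1.0 / (1.0 + np.exp(-gamma * d))

# Hypothetical example: the correct class scores -42.0 and two rivals score lower,
# so the token is correctly classified and the loss is close to zero.
print(mce_loss(-42.0, [-45.0, -47.5]))
```

In the full scheme, the MAP/VFS-adapted model parameters would serve as the starting point for gradient-based (GPD) minimization of this loss over the same 50-word adaptation set.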
