Abstract
We previously proposed unsupervised cross-validation (CV) adaptation, which introduces CV into an iterative unsupervised batch-mode adaptation framework to suppress the influence of errors in the internally generated recognition hypothesis, and showed that it improves recognition performance. However, those experiments had two limitations: they used only a clean speech recognition task with an ML-trained initial acoustic model, and only the CV method was investigated even though other ensemble methods could also be applied. In this study, we evaluate the CV method using a discriminatively trained baseline and a noisy speech recognition task. As an alternative to CV adaptation, we propose and investigate unsupervised aggregated (Ag) adaptation, which introduces a bagging-like idea instead of CV. Experimental results show that both CV and Ag adaptation consistently give larger improvements than conventional batch adaptation, with the former being more advantageous in terms of computational cost.
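To make the CV adaptation scheme concrete, the following is a minimal sketch of the idea described above: the data are split into K subsets, and the hypothesis used to adapt on each subset is generated by a model adapted only on the other K-1 subsets, so that recognition errors do not reinforce themselves as they can in conventional batch adaptation. The function names `adapt` and `recognize` are hypothetical placeholders, not the authors' actual API.

```python
# Hypothetical sketch of unsupervised cross-validation (CV) adaptation.
# `adapt` and `recognize` are placeholder callables (assumptions, not the
# paper's implementation): `adapt(model, data, labels)` returns an adapted
# model, `recognize(model, utterance)` returns a hypothesis label.

def cv_adapt(model, utterances, adapt, recognize, k=4, iterations=3):
    """Iterative unsupervised batch adaptation with K-fold CV labeling.

    Conventional batch adaptation labels all data with one model and
    re-adapts on those same labels, so errors can feed back into the
    model. Here, the hypothesis for each fold is produced by a model
    adapted only on the other k-1 folds.
    """
    folds = [utterances[i::k] for i in range(k)]
    for _ in range(iterations):
        cv_labels = []
        for i in range(k):
            rest = [u for j, fold in enumerate(folds) if j != i
                    for u in fold]
            # Adapt on all folds except fold i, using that model's own
            # hypotheses for the adaptation data.
            m_i = adapt(model, rest, [recognize(model, u) for u in rest])
            # Label the held-out fold with a model that never saw it.
            cv_labels.append([recognize(m_i, u) for u in folds[i]])
        # Re-adapt the model on all data using the CV-generated labels.
        model = adapt(model,
                      [u for fold in folds for u in fold],
                      [lab for fold_labs in cv_labels for lab in fold_labs])
    return model
```

The Ag variant mentioned above would differ mainly in how the subsets are drawn (bagging-style resampled subsets rather than disjoint folds) and in aggregating multiple adapted models; note that CV reuses each fold exactly once per iteration, which is one reason it can be cheaper computationally.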