Abstract
Recently, the low-rank plus diagonal (LRPD) adaptation was proposed for speaker adaptation of deep neural network (DNN) models. LRPD restructures the adaptation matrix as a superposition of a diagonal matrix and a product of two low-rank matrices. In this paper, we extend the LRPD adaptation to a subspace-based approach to further reduce the speaker-dependent (SD) footprint. We apply the extended LRPD (eLRPD) adaptation to DNN and LSTM models, with emphasis on the applicability of the adaptation to large-scale speech recognition systems. To speed up adaptation at test time, we propose a bottleneck (BN) caching approach that eliminates redundant computations during multiple sweeps of the development data. Experimental results on the short message dictation (SMD) task show that the eLRPD adaptation reduces the SD footprint by 82% for the SVD DNN and 96% for the LSTM-RNN relative to linear adaptation, while maintaining comparable accuracy. BN caching achieves up to a 3.5x speedup in adaptation with no loss of recognition accuracy.
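As a rough illustration of the LRPD structure described above, the sketch below (with an illustrative hidden dimension `d`, low rank `k`, and variable names that are assumptions, not taken from the paper) shows how an LRPD-structured adaptation transform replaces a full d-by-d speaker-dependent matrix with a diagonal term plus a product of two low-rank factors, which is where the footprint reduction comes from.

```python
import numpy as np

d, k = 1024, 16          # hidden dimension and low rank (illustrative values)

# Full linear adaptation: one d x d speaker-dependent matrix.
W_full = np.random.randn(d, d)

# LRPD adaptation: diagonal term plus a product of two low-rank matrices.
diag_vec = np.random.randn(d)        # d parameters
A = np.random.randn(d, k)            # d*k parameters
B = np.random.randn(k, d)            # k*d parameters
W_lrpd = np.diag(diag_vec) + A @ B   # same shape as W_full, far fewer free parameters

h = np.random.randn(d)               # a hidden-layer activation
out_full = W_full @ h
out_lrpd = W_lrpd @ h                # equivalently: diag_vec * h + A @ (B @ h)

print("full SD parameters:", d * d)          # 1,048,576
print("LRPD SD parameters:", d + 2 * d * k)  # 33,792
```

The eLRPD extension described in the paper goes further by tying the adaptation to a subspace-based formulation so that an even smaller speaker-specific component remains; the sketch above only illustrates the basic diagonal-plus-low-rank structure.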