Abstract

A speaker adaptation method based on the generalized low rank approximation of matrices (GLRAM) of training models is described. In the method, each model is represented as a matrix, and a set of such training matrices is decomposed into a set of speaker weights and two basis matrices for the row and column spaces by reducing both the row and column ranks of the training models. As a result, the speaker weight becomes a matrix whose row and column dimensions can be adjusted. In an isolated-word experiment, the proposed method outperformed both the eigenvoice and MLLR methods for adaptation data of about 20 s or longer.
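For reference, the decomposition described above can be written as the standard GLRAM objective; the notation below (training models $A_i$, basis matrices $L$, $R$, speaker weights $M_i$, ranks $r_1$, $r_2$) is illustrative and not necessarily the paper's own:

$$
\min_{L,\,R,\,\{M_i\}} \sum_{i=1}^{n} \bigl\| A_i - L M_i R^{\top} \bigr\|_F^2,
\qquad L^{\top} L = I_{r_1}, \quad R^{\top} R = I_{r_2},
$$

where each training model $A_i \in \mathbb{R}^{d_1 \times d_2}$ is approximated using a row-space basis $L \in \mathbb{R}^{d_1 \times r_1}$ and a column-space basis $R \in \mathbb{R}^{d_2 \times r_2}$, and the speaker weight $M_i = L^{\top} A_i R \in \mathbb{R}^{r_1 \times r_2}$ is a matrix whose dimensions are set by the adjustable row and column ranks $r_1$ and $r_2$.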
