Abstract

In this paper, we describe an adaptation method for speech recognition systems based on a nonlinear transformation of the feature space. Most existing adaptation methods assume some form of affine transformation of either the feature vectors or the acoustic models that model them. In contrast, our method composes a general nonlinear transformation from two stages: an affine transformation that combines the dimensions of the original feature space, followed by a nonlinear transformation applied independently to each dimension of the affinely transformed space. Together, the two stages yield a general multidimensional nonlinear transformation of the original feature space. The method also differs from affine techniques in how the transformation parameters are shared: whereas most previous methods share parameters on the basis of the phonetic class, our method shares the parameters of the nonlinear transformation on the basis of the location in the feature space. Experimental results show that the method outperforms affine methods, providing up to a 25% relative improvement in word error rate on an in-car speech recognition task.
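The two-stage composition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the per-dimension nonlinearity is piecewise linear, so that the parameters applied to a value are selected by where that value falls in the (affinely transformed) feature space rather than by phonetic class. All names, shapes, and parameter values are illustrative.

```python
import numpy as np

def adapt(x, A, b, knots, values):
    """Composed nonlinear feature transform (illustrative sketch).

    Stage 1: affine map y = A @ x + b mixes the original dimensions.
    Stage 2: each dimension of y passes through its own 1-D
    piecewise-linear function (np.interp over knot/value pairs), so the
    parameters used depend on the location of y[d] in feature space,
    not on any phonetic class.
    """
    y = A @ x + b  # affine stage: combines the original dimensions
    # nonlinear stage: independent 1-D warping of each dimension
    return np.array([np.interp(y[d], knots[d], values[d])
                     for d in range(y.size)])

# Toy usage: 2-D features, identity affine stage, mild per-dimension warping.
A = np.eye(2)
b = np.zeros(2)
knots = [np.array([-1.0, 0.0, 1.0]), np.array([-1.0, 0.0, 1.0])]
values = [np.array([-1.2, 0.0, 0.8]), np.array([-0.9, 0.1, 1.1])]
z = adapt(np.array([0.5, -0.5]), A, b, knots, values)
```

In a real system the knot locations and values would be estimated from adaptation data; the sketch only shows how sharing by feature-space location, rather than by phonetic class, falls out of the piecewise construction.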
