Abstract

An invariant feature is a nonlinear projection whose output shows less intra-class variability than its input. In machine learning, invariant features may be given a priori, on the basis of scientific knowledge, or they may be learned using feature selection algorithms. In the task of acoustic feature extraction for automatic speech recognition, for example, a candidate for a priori invariance is provided by the theory of phonological distinctive features, which specifies that any given distinctive feature should correspond to a fixed acoustic correlate (a fixed classification boundary between positive and negative examples), regardless of context. A learned invariance might, instead, project each phoneme into a high-dimensional Gaussian mixture supervector space and, in that space, learn an inter-phoneme distance metric that minimizes the distances among examples of any given phoneme. Results are available for both approaches, but they are not easy to compare: learned invariance outperforms a priori invariance for some task definitions and underperforms for others. As future work, we propose that the a priori invariance might be used to regularize a learned invariance projection.
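The Gaussian mixture supervector representation mentioned above can be sketched as follows: adapt the means of a universal background model (UBM) to one utterance's acoustic frames via relevance MAP, then stack the adapted means into a single high-dimensional vector. This is a minimal illustration, not the paper's implementation; the function name, the relevance factor `r`, and the diagonal-covariance assumption are choices made here for the sketch.

```python
import numpy as np

def map_adapt_supervector(frames, ubm_means, ubm_covs, ubm_weights, r=16.0):
    """Stack relevance-MAP-adapted UBM means into one supervector.

    frames:      (T, D) acoustic feature frames for one utterance
    ubm_means:   (K, D) UBM component means
    ubm_covs:    (K, D) UBM diagonal covariances
    ubm_weights: (K,)   UBM mixture weights
    r:           relevance factor controlling adaptation strength
    """
    K, D = ubm_means.shape
    # Log posterior of each diagonal-covariance component per frame.
    log_post = np.empty((len(frames), K))
    for k in range(K):
        diff = frames - ubm_means[k]
        log_post[:, k] = (np.log(ubm_weights[k])
                          - 0.5 * np.sum(diff**2 / ubm_covs[k]
                                         + np.log(2 * np.pi * ubm_covs[k]),
                                         axis=1))
    log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)

    # Zeroth- and first-order Baum-Welch statistics.
    n_k = post.sum(axis=0)                 # (K,)  soft frame counts
    f_k = post.T @ frames                  # (K, D) first-order stats

    # Relevance-MAP interpolation between data and the UBM prior.
    alpha = (n_k / (n_k + r))[:, None]
    adapted = (alpha * (f_k / np.maximum(n_k, 1e-8)[:, None])
               + (1 - alpha) * ubm_means)
    return adapted.reshape(-1)             # supervector of length K * D
```

A learned invariance would then fit a distance metric on these supervectors that pulls together examples of the same phoneme; as the relevance factor grows, the supervector collapses back to the stacked UBM means, i.e., no adaptation.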

