Abstract

Recently, mutual interdependence analysis (MIA) has been successfully used to extract representations, or "mutual features", accounting for the samples in a class. For example, a mutual feature is a face signature under varying illumination conditions or a speaker signature under varying channel conditions. A mutual feature is a linear regression that is equally correlated with all samples of the input class. Previous work discussed two equivalent definitions of this problem and a generalization of its solution called generalized MIA (GMIA). Moreover, it showed how mutual features can be computed and employed. This paper uses a parametrized version, GMIA(λ), to pursue a deeper understanding of what GMIA features really represent. It defines a generative signal model that is used to interpret GMIA(λ) and to visualize how it differs from MIA, principal component analysis, and independent component analysis. Finally, we analyze the effect of λ on the feature extraction performance of GMIA(λ) in two standard pattern recognition problems: illumination-independent face recognition and text-independent speaker verification.
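The abstract's working definition of a mutual feature, a linear regression that is equally correlated with all samples of a class, can be illustrated as a regularized least-squares problem. The sketch below is only one plausible reading of that definition, with the ridge weight `lam` playing the role of λ in GMIA(λ); the function name, the synthetic data, and the exact regularization form are assumptions for illustration, not the authors' estimator.

```python
import numpy as np

def mutual_feature(X, lam=0.0):
    """Sketch of a 'mutual feature' as ridge regression to a constant target.

    X   : (n_features, n_samples) column-wise samples of one class
    lam : regularization strength (stand-in for lambda in GMIA(lambda))

    Solves  min_w ||X^T w - 1||^2 + lam * ||w||^2,
    i.e. seeks a direction w whose correlation with every sample of the
    class is approximately the same constant.
    """
    d, n = X.shape
    ones = np.ones(n)
    # Closed-form ridge solution: w = (X X^T + lam I)^{-1} X 1
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ ones)

# Toy usage: 20 samples sharing a common component plus noise.
rng = np.random.default_rng(1)
common = rng.normal(size=64)
X = common[:, None] + 0.3 * rng.normal(size=(64, 20))
w = mutual_feature(X, lam=1e-2)
print(np.std(X.T @ w))  # small spread: samples are nearly equally correlated with w
```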

Highlights

  • Statistical pattern recognition methods such as Fisher’s linear discriminant analysis (FLDA) [9], canonical correlation analysis (CCA) [16] or ridge regression [25] aim to model or extract the essence of a dataset

  • Results indicate that the generalized mutual interdependence analysis GMIA(0) feature is more robust to variations in illumination than the one obtained with GMIA(λ), while their discrimination power against other classes appears comparable

  • Thereafter, we analyze how λ and the data segmentation affect the result of a GMIA-based text-independent speaker verification system


Summary

Introduction

Statistical pattern recognition methods such as Fisher’s linear discriminant analysis (FLDA) [9], canonical correlation analysis (CCA) [16] or ridge regression [25] aim to model or extract the essence of a dataset. Pattern recognition problems implicitly assume that the number of observations is much larger than the dimensionality of each observation. This allows one to study the distributional characteristics of the observations and design proper discriminant functions for classification. FLDA is used to reduce the dimensionality of a dataset by projecting data points onto a space that maximizes the ratio of the between- and within-class scatter of the training data. In this way, FLDA aims to find a simplified data representation that retains the discriminant characteristics needed for classification.
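To make the scatter-ratio objective concrete, the following sketch computes FLDA projection directions by forming the within- and between-class scatter matrices and solving the associated eigenproblem. It is a generic textbook implementation run on synthetic data, not code from the paper; the small ridge added to the within-class scatter is an assumption to keep the solve well conditioned.

```python
import numpy as np

def flda_directions(X, y, n_components=1):
    """Fisher's LDA: directions maximizing between- vs. within-class scatter.

    X : (n_samples, n_features) data matrix
    y : (n_samples,) integer class labels
    """
    classes = np.unique(y)
    mean_total = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_total)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Solve Sb w = lambda Sw w via Sw^{-1} Sb; the tiny ridge keeps Sw invertible.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Toy usage: two Gaussian classes in 3-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
w = flda_directions(X, y)
print(w.shape)  # (3, 1): one discriminant direction
```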
