Abstract

In the pattern recognition field, objects are usually represented by multiple features (multimodal features). For example, to characterize a natural scene image, it is essential to extract a set of visual features representing its color, texture, and shape information. However, integrating multimodal features for recognition is challenging because: (1) each feature has its own statistical properties and physical interpretation, (2) the huge number of features may lead to the curse of dimensionality (when the data dimension is high, the distances between pairs of objects in the feature space become increasingly similar due to the central limit theorem, which negatively affects recognition performance), and (3) some features may be unavailable. To address these problems, a new multimodal feature selection algorithm, termed Grassmann manifold feature selection (GMFS), is proposed. In particular, by defining a clustering criterion, the multimodal features are transformed into a matrix and further treated as a point on the Grassmann manifold, following Hamm and Lee (Grassmann discriminant analysis: a unifying view on subspace-based learning. In: Proceedings of the 25th International Conference on Machine Learning (ICML), pp. 376–383, Helsinki, Finland, 2008). To deal with unavailable features, the L2-Hausdorff distance, a metric between matrices of different sizes, is computed, and the kernel is obtained accordingly. Based on this kernel, we propose supervised and unsupervised feature selection algorithms to achieve a physically meaningful embedding of the multimodal features. Experimental results on eight data sets validate the effectiveness of the proposed approach.
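To make the subspace representation concrete, the sketch below shows one plausible pipeline in Python: each object's multimodal features are stacked column-wise, orthonormalized via SVD to obtain a point on the Grassmann manifold, and pairwise subspace distances are turned into a kernel. This is only an illustration under stated assumptions; the principal-angle-based distance and the RBF-style kernel used here are stand-ins, not necessarily the exact L2-Hausdorff distance or kernel construction of GMFS, and all function names are hypothetical.

```python
import numpy as np

def subspace_basis(feature_matrix, k):
    """Orthonormal basis (n x k) of the span of the stacked multimodal
    features; this basis serves as the Grassmann-manifold representation."""
    # Left singular vectors give an orthonormal basis of the column space.
    U, _, _ = np.linalg.svd(feature_matrix, full_matrices=False)
    return U[:, :k]

def principal_angle_distance(Y1, Y2):
    """Illustrative subspace distance based on principal angles.

    Y1 (n x k1) and Y2 (n x k2) may span subspaces of different
    dimensions; only min(k1, k2) principal angles are defined.
    This is a stand-in for the paper's L2-Hausdorff distance."""
    s = np.linalg.svd(Y1.T @ Y2, compute_uv=False)
    s = np.clip(s, 0.0, 1.0)                   # cosines of principal angles
    return np.sqrt(np.sum(np.arccos(s) ** 2))  # hypothetical metric choice

def kernel_from_distances(D, gamma=1.0):
    """Turn a pairwise subspace-distance matrix into a kernel matrix."""
    return np.exp(-gamma * D ** 2)

# Usage: two objects described by different numbers of feature vectors,
# e.g. one object has a modality missing.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((50, 6))   # object 1: 6 modality features, 50-dim
X2 = rng.standard_normal((50, 4))   # object 2: one modality unavailable
Y1, Y2 = subspace_basis(X1, 6), subspace_basis(X2, 4)
d = principal_angle_distance(Y1, Y2)
K = kernel_from_distances(np.array([[0.0, d], [d, 0.0]]))
print(d, K)
```

Any downstream supervised or unsupervised feature selection would then operate on a kernel of this form; the specific selection criteria are defined in the paper itself.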
