Abstract

Orthogonality has been shown to confer many desirable properties, such as noise tolerance, suitability for data visualization, and distance preservation. However, it is often incompatible with existing models, and even when compatible, the resulting optimization problem is challenging. To address these issues, we propose a trace ratio formulation for multi-view subspace learning that learns an individual orthogonal projection for each view. The proposed formulation integrates the correlations among multiple views, supervised discriminant capacity, and distance preservation in a concise and compact way. It not only includes several existing models as special cases but also inspires new ones. Moreover, we present an efficient numerical method, based on successive approximations via eigenvectors, to solve the associated optimization problem. The method is built upon an iterative Krylov subspace method and therefore scales readily to high-dimensional datasets. Extensive experiments are conducted on various real-world datasets for multi-view discriminant analysis and multi-view multi-label classification. The experimental results demonstrate that the proposed models are consistently competitive with, and often better than, the compared methods.
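The abstract does not spell out the iteration, but the classical trace ratio scheme it alludes to alternates between updating the ratio value and solving a shifted eigenproblem, with the eigenvectors computed by a Krylov (Lanczos) method. A minimal sketch, assuming generic symmetric matrices `A` (e.g. between-class or cross-view scatter) and `B` (e.g. within-class scatter), which are placeholders and not the paper's exact operators:

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # Krylov (Lanczos) eigensolver


def trace_ratio(A, B, k, tol=1e-8, max_iter=100):
    """Maximize tr(V^T A V) / tr(V^T B V) over orthonormal V (d x k).

    Classical iteration: given the current ratio lam, take V as the
    top-k eigenvectors of A - lam * B, then recompute lam; repeat
    until the ratio stabilizes. A, B are symmetric placeholders here.
    """
    d = A.shape[0]
    rng = np.random.default_rng(0)
    # Random orthonormal starting point.
    V, _ = np.linalg.qr(rng.standard_normal((d, k)))
    lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(max_iter):
        # Successive approximation via eigenvectors of the shifted matrix,
        # solved with a Krylov subspace method (scales to large sparse A, B).
        _, V = eigsh(A - lam * B, k=k, which="LA")
        new_lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return lam, V
```

With `B` equal to the identity, the optimum is simply the mean of the top-k eigenvalues of `A`, which gives a quick sanity check of the iteration.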
