Abstract

Recently, multi-view feature learning has attracted considerable research interest, and discriminant analysis-based multi-view feature learning is an important branch of this research. Although several multi-view discriminant analysis methods have been presented, there is still room for improvement: how to effectively exploit discriminant information and local geometric structure simultaneously across multiple views remains an open problem. In this paper, we propose a novel approach named uncorrelated locality-sensitive multi-view discriminant analysis, which jointly learns multiple view-specific transformations such that, in the projected subspace of each view, within-class nearby samples are close to each other while between-class nearby samples are far apart. We further introduce a multi-view sample distance term to promote one-to-one data consistency across views. In addition, we design uncorrelated constraints to reduce the redundancy among the transformations. Experiments on two widely used datasets demonstrate the effectiveness of the proposed approach.
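To make the structure of such an objective concrete, a minimal sketch of one plausible formulation is given below; the trade-off weights $\lambda_1, \lambda_2$, the neighborhood affinities $S^{w}_{ij}, S^{b}_{ij}$, and the constraint form are illustrative assumptions rather than the paper's exact notation.

\[
\min_{\{W_v\}} \;
\sum_{v=1}^{V} \sum_{i,j} S^{w}_{ij}\,\bigl\| W_v^{\top} x_i^{(v)} - W_v^{\top} x_j^{(v)} \bigr\|^{2}
\;-\;
\lambda_1 \sum_{v=1}^{V} \sum_{i,j} S^{b}_{ij}\,\bigl\| W_v^{\top} x_i^{(v)} - W_v^{\top} x_j^{(v)} \bigr\|^{2}
\;+\;
\lambda_2 \sum_{v \neq u} \sum_{i} \bigl\| W_v^{\top} x_i^{(v)} - W_u^{\top} x_i^{(u)} \bigr\|^{2}
\quad \text{s.t.} \quad W_v^{\top} \Sigma_v W_v = I,
\]

where $W_v$ is the transformation for view $v$, $x_i^{(v)}$ is the $i$-th sample observed in view $v$, $S^{w}_{ij}$ (resp. $S^{b}_{ij}$) is nonzero only for nearby within-class (resp. between-class) pairs, the third term encodes the one-to-one cross-view consistency described above, and the constraint $W_v^{\top}\Sigma_v W_v = I$ (with $\Sigma_v$ a view-specific scatter or covariance matrix) is one common way to enforce uncorrelated, non-redundant projected features.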
