Abstract
Multi-view feature learning has recently attracted considerable research interest, and discriminant analysis-based multi-view feature learning is an important branch of this work. Although several multi-view discriminant analysis methods have been proposed, there is still room for improvement: how to effectively exploit discriminant information and local geometrical structure simultaneously across multiple views remains an open problem. In this paper, we propose a novel approach, uncorrelated locality-sensitive multi-view discriminant analysis, which jointly learns multiple view-specific transformations such that, in the projected subspace of each view, nearby within-class samples are close to each other while nearby between-class samples are far apart. We introduce a multi-view sample distance term to promote one-to-one data consistency across views, and we design uncorrelated constraints to reduce redundancy among the transformations. Experiments on two widely used datasets demonstrate the effectiveness of the proposed approach.
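To make the stated objective concrete, the following is a minimal sketch of one possible formulation, under assumed notation; it is not the authors' exact objective. Here $W_v$ is the transformation for view $v$, $x_i^{(v)}$ is the $i$-th sample in view $v$, $S_w^{(v)}$ and $S_b^{(v)}$ denote affinity-weighted (locality-sensitive) within-class and between-class scatter matrices, $S_t^{(v)}$ is a total scatter matrix used in the assumed uncorrelated constraint, and $\lambda$, $\mu$ are trade-off parameters:

$$
\min_{\{W_v\}_{v=1}^{V}} \;
\sum_{v=1}^{V} \Big( \operatorname{tr}\!\big(W_v^{\top} S_w^{(v)} W_v\big)
- \lambda \, \operatorname{tr}\!\big(W_v^{\top} S_b^{(v)} W_v\big) \Big)
+ \mu \sum_{v \neq u} \sum_{i} \big\| W_v^{\top} x_i^{(v)} - W_u^{\top} x_i^{(u)} \big\|_2^2
\quad \text{s.t.} \quad W_v^{\top} S_t^{(v)} W_v = I .
$$

In this sketch, the first two trace terms capture the locality-sensitive discriminant criterion within each view, the pairwise distance term enforces one-to-one consistency of the same sample across views, and the constraint is one common way to impose uncorrelated projected features.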