Abstract

Cross-view data correlation analysis is a typical learning paradigm in machine learning and pattern recognition. To associate data from different views, many correlation learning approaches have been proposed, among which canonical correlation analysis (CCA) is representative. When data carry label information, CCA can be extended to a supervised version by embedding the supervision into the model. Although most CCA variants achieve good performance, nearly all of their objective functions are nonconvex, so their globally optimal solutions are difficult to obtain. More seriously, discriminative scatter and manifold structure are not exploited simultaneously. To overcome these shortcomings, in this paper we propose Discriminative Correlation Learning with Manifold Preservation (DCLMP), in which, in addition to within-view supervision information, discriminative knowledge and spatial structural information are exploited to benefit subsequent decision making. To pursue a closed-form solution, we remodel the objective of DCLMP from Euclidean space to a geodesic space and obtain a convex formulation, C-DCLMP. Finally, we comprehensively evaluate the proposed methods and demonstrate their superiority on both toy and real datasets.
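To make the baseline concrete, the following is a minimal sketch of classical two-view CCA, the representative method the abstract refers to. It is not the proposed DCLMP or C-DCLMP; the function name `cca`, the ridge term `reg`, and the Cholesky-based whitening route are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of classical (unsupervised) two-view CCA.
# Assumptions: ridge regularizer `reg` for numerical stability; whitening via Cholesky.
import numpy as np

def cca(X, Y, n_components=2, reg=1e-6):
    """Return projections (Wx, Wy) and canonical correlations for views X (n x p) and Y (n x q)."""
    X = X - X.mean(axis=0)                        # center each view
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # within-view covariances (regularized)
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n                             # cross-view covariance
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Cxx))
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Cyy))
    # SVD of the whitened cross-covariance gives the canonical directions
    U, s, Vt = np.linalg.svd(Lx_inv @ Cxy @ Ly_inv.T)
    Wx = Lx_inv.T @ U[:, :n_components]
    Wy = Ly_inv.T @ Vt.T[:, :n_components]
    return Wx, Wy, s[:n_components]               # s holds the canonical correlations
```

In use, `Wx, Wy, corr = cca(X1, X2, n_components=5)` followed by `X1 @ Wx` and `X2 @ Wy` projects both views into a shared subspace in which corresponding components are maximally correlated; supervised variants such as DCLMP additionally embed label and structural information into this objective.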
