Abstract
Multi-modal Canonical Correlation Analysis (MCCA) is an important information fusion method, and several discriminant variations of MCCA have been proposed. However, these variations suffer from the Small Sample Size (SSS) problem and the absence of cross-modal discriminant scatters. We therefore propose a novel exponential multi-modal discriminant feature fusion method for small numbers of training samples, i.e., exponential multi-modal discriminant correlation analysis. In this method, we construct a discriminative integration scatter over all modalities by constraining aggregation towards cross-modal discriminative centroids. In addition, the method employs a decomposition-based matrix exponential strategy. This strategy solves the SSS problem and improves robustness to noise, and we provide corresponding theoretical proofs and intuitive analysis. The method can learn correlation fusion features with strong discriminative power from a small number of samples. Encouraging experimental results show the effectiveness and robustness of our method.
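The abstract's key numerical idea is that replacing a scatter matrix S with its matrix exponential exp(S) sidesteps the SSS problem: even when S is singular (more features than samples), exp(S) is always full-rank and invertible. The following sketch, assuming only NumPy and a symmetric scatter matrix (the paper's specific decomposition-based strategy may differ), illustrates this:

```python
import numpy as np

def expm_sym(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    if S = V diag(w) V^T, then exp(S) = V diag(exp(w)) V^T."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

# A rank-deficient scatter matrix in an SSS setting: fewer samples than features
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 10))     # 3 samples, 10 features
S = X.T @ X                          # 10x10 scatter, rank <= 3, hence singular
print(np.linalg.matrix_rank(S))      # 3
E = expm_sym(S)
print(np.linalg.matrix_rank(E))      # 10: exp(S) is full-rank, so it is invertible
```

Intuitively, the zero eigenvalues of S are mapped to exp(0) = 1, so no direction is lost, which is why exponential discriminant formulations remain well-posed when the raw scatter matrices cannot be inverted.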
Highlights
Data collected in real-world applications is divided into single-modal and multi-modal data depending on the number of data representations corresponding to the same target [1]
Multi-modal canonical correlation analysis (MCCA) is a multi-modal feature fusion method that aims at finding a correlation projection direction φ(p) ∈ R^(dp×1) for each modality Z(p) (p = 1, 2, ..., M)
In the experiments, we compare our method with six representative multi-modal feature fusion methods: graph multiview canonical correlation analysis (GMCCA) [38], labeled multiple canonical correlation analysis (LMCCA) [39], discriminative multiple canonical correlation analysis (DMCCA) [21], multi-view discriminant analysis (MvDA) [13], Laplacian multi-set canonical correlation analysis (LapMCCA) [23], and graph regularized multiset canonical correlations (GrMCC) [22]
Summary
Data collected in real-world applications is divided into single-modal and multi-modal data depending on the number of data representations corresponding to the same target [1]. Among multi-modal feature fusion methods, canonical correlation analysis (CCA) [6] plays an important role. To embed class labels into correlation analysis, generalized CCA [12] optimizes the within-modal discriminative information while maximizing the across-modal correlation. Multi-modal correlation theories with supervised information are thus an important and active research subject in multi-modal feature fusion. However, the singularity of matrices is usually fatal to the optimization of feature fusion methods. To solve these issues, we propose a novel exponential multi-modal discriminant correlation analysis (EMDCA) method. By minimizing the exponential discriminative integration scatter and simultaneously maximizing exponential between-modal correlations, the method obtains correlation fusion features with strong discriminative power from a small number of raw samples. MCCA is a multi-modal feature fusion method that aims at finding a correlation projection direction φ(p) ∈ R^(dp×1) for each modality Z(p); from various perspectives, MCCA can be interpreted as maximizing between-modal correlations while minimizing within-modal global scatters.
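The "maximal between-modal correlation" objective underlying MCCA can be made concrete with classical two-modality CCA. The sketch below (a standard SVD-based CCA, not the authors' EMDCA; the small ridge term `reg` is an assumption added for numerical stability) finds the first pair of projection directions maximizing the correlation between two views of the same latent signal:

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-8):
    """First pair of canonical directions for two modalities (classical CCA).

    Solves max_{wx,wy} corr(X wx, Y wy) via the SVD of
    Cxx^{-1/2} Cxy Cyy^{-1/2}; `reg` is a small ridge term for stability.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # C^{-1/2} for a symmetric positive-definite matrix
        w, V = np.linalg.eigh(C)
        return (V / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    wx = inv_sqrt(Cxx) @ U[:, 0]
    wy = inv_sqrt(Cyy) @ Vt[0]
    return wx, wy, s[0]              # s[0] is the first canonical correlation

# Two "modalities" generated from a shared latent factor z
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
X = np.column_stack([z + 0.1 * rng.standard_normal(200) for _ in range(4)])
Y = np.column_stack([z + 0.1 * rng.standard_normal(200) for _ in range(3)])
wx, wy, rho = cca_first_pair(X, Y)
print(rho)                           # close to 1: the shared factor is recovered
```

Supervised variants such as the EMDCA summarized above add discriminative scatter terms to this correlation objective, so that the learned projections separate classes as well as align modalities.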