Abstract

The ever-growing volume of multi-modal data has garnered considerable research enthusiasm in many fields, such as biomedical image analysis and biometrics. How to effectively fuse multi-modal data to improve learning performance remains a challenge. Many data fusion methods have been proposed, such as multi-kernel learning (MKL) and canonical correlation analysis (CCA). Most previous fusion methods focus on concatenating the multi-modal data, which fails to preserve the structural information across different modalities. In addition, they treat all samples equally during fusion and ignore differences in the contributions of individual samples to the fusion model. In this paper, we propose a discriminative multi-modal dimensionality reduction method, which can seamlessly fuse the multi-modal data and explore the latent correlation among different modalities for robust representation learning. In optimization, self-paced learning is adopted to dynamically estimate the contribution of each sample to the overall fusion model. Following an easy-to-hard learning sequence, our model can adaptively and sequentially enhance the robustness of the learning system in a knowledge-discovery manner. Extensive experimental results on multi-modal brain disease diagnosis and multi-spectral palmprint classification tasks demonstrate that the proposed method outperforms previous multi-modal classification methods.

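To make the self-paced learning component of the abstract concrete, the sketch below illustrates the standard hard-weighting scheme it refers to: each sample receives a binary weight based on its current loss, the model is refit on the currently "easy" samples, and the age parameter is relaxed so harder samples are admitted over rounds. This is a minimal illustration only, not the authors' implementation; the callables `fit_fn` and `loss_fn`, the parameter `lam`, and the `growth` schedule are assumed placeholders.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weighting: include a sample only if its loss is below lam."""
    return (losses < lam).astype(float)

def self_paced_training(X_views, y, fit_fn, loss_fn, lam=0.5, growth=1.3, n_rounds=5):
    """Illustrative easy-to-hard loop (hypothetical fit_fn/loss_fn, not the paper's model).

    Alternates between fitting a fusion model on the currently selected 'easy'
    samples and admitting harder samples by relaxing the age parameter lam.
    """
    n = len(y)
    v = np.ones(n)  # start by treating all samples as selected
    model = None
    for _ in range(n_rounds):
        model = fit_fn(X_views, y, sample_weight=v)  # refit on weighted samples
        losses = loss_fn(model, X_views, y)          # per-sample losses
        v = self_paced_weights(losses, lam)          # re-select easy samples
        lam *= growth                                # easy-to-hard schedule
    return model, v
```

In practice, `X_views` would hold the per-modality feature matrices and `fit_fn` would solve the discriminative dimensionality reduction step on the weighted samples; the binary weights can also be replaced by soft (e.g., linear or mixture) self-paced regularizers.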