Abstract
• We provide a deeper match framework for multi-view fusion representation from the kernel perspective, which offers greater flexibility and shows great potential for integrating complementary information among different views.
• We design the local deep view-specific self-kernel (LDSvK) by mimicking deep neural networks to faithfully characterize the local similarity between view-specific samples, so that the representation capacity of each view can be saliently analyzed.
• We build the global deep multi-view fusion kernel (GDMvK) by learning deep combinations of the LDSvKs, yielding a comprehensive measurement of cross-view similarity that applies the deep learning strategy to enhance the representation performance of multiple kernel learning (MKL). Thus, the complementary property among multiple views is fully exploited.
• Experiments on several real-world multi-view datasets validate the effectiveness of our method for classification.

Multi-view representation learning aims to exploit the complementary information underlying multiple views of data to enhance the expressive power of the data representation. Although kernels in multiple kernel learning naturally correspond to different views, previous shallow similarity learning models cannot fully capture the complex hierarchical information. This work presents an effective deeper match model for multi-view oriented kernel (DMMV) learning, which brings a deeper insight into kernel matching for similarity-based multi-view representation fusion. Specifically, we propose the local deep view-specific self-kernel (LDSvK), which mimics deep neural networks to faithfully characterize the local similarity between view-specific samples; thus, the representation capacity of each view can be saliently analyzed. We then build the global deep multi-view fusion kernel (GDMvK) by learning a deep fusion of the LDSvKs to obtain a comprehensive measurement of cross-view similarity.
Notably, the proposed learning framework of deeper local information extraction and global deep multiple kernel fusion provides a robust way of fitting multi-view data and yields better learning performance. Experimental results on several multi-view benchmark datasets demonstrate the effectiveness of our DMMV over other state-of-the-art methods.
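To make the two-stage idea concrete, the following is a minimal sketch of the pipeline described above: a per-view base kernel is composed through several nonlinear layers (an illustrative stand-in for an LDSvK), and the resulting view kernels are fused by a weighted combination (a stand-in for the GDMvK). All function names, the tanh layer transform, and the fixed fusion weights are illustrative assumptions; the actual DMMV model learns both the layer parameters and the fusion.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Base (layer-0) similarity: Gaussian RBF Gram matrix for one view.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def deep_view_kernel(X, depth=2, gamma=1.0):
    # Illustrative LDSvK-style kernel: stack elementwise nonlinear layers
    # on top of the base Gram matrix, mimicking a deep network's
    # layer-wise composition. (Hypothetical layer transform.)
    K = rbf_kernel(X, gamma)
    for _ in range(depth):
        K = np.tanh(0.5 * K + 0.5)
    return K

def fuse_kernels(kernels, weights):
    # Illustrative GDMvK-style fusion: convex combination of the per-view
    # deep kernels. DMMV would learn these weights; here they are fixed.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels))

# Toy two-view data: 5 samples, different feature dimensions per view.
rng = np.random.default_rng(0)
views = [rng.normal(size=(5, 4)), rng.normal(size=(5, 7))]
Ks = [deep_view_kernel(V) for V in views]
K_fused = fuse_kernels(Ks, weights=[0.6, 0.4])
print(K_fused.shape)
```

The fused Gram matrix `K_fused` can then be handed to any kernel classifier (e.g. an SVM) for the downstream classification task, which is how the multi-view experiments in such work are typically evaluated.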