Abstract

Multi-view representation learning aims to exploit the complementary information underlying multiple views of data to enhance the expressive power of data representations. Although kernels in multiple kernel learning naturally correspond to different views, previous shallow similarity learning models cannot fully capture the complex hierarchical information in such data. This work presents an effective deeper match model for multi-view oriented kernel (DMMV) learning, which brings deeper insight into kernel matching for similarity-based multi-view representation fusion. Specifically, we propose the local deep view-specific self-kernel (LDSvK), which mimics a deep neural network to faithfully characterize the local similarity between view-specific samples, so that the representation capacity of each view can be explicitly analyzed. We then build the global deep multi-view fusion kernel (GDMvK) as a learned deep fusion of the LDSvKs, yielding a comprehensive measure of cross-view similarity. Notably, the proposed framework of deeper local information extraction and global deep multiple kernel fusion provides a robust way of fitting multi-view data and yields better learning performance. Experimental results on several multi-view benchmark datasets demonstrate the effectiveness of DMMV over state-of-the-art methods.
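
The abstract does not spell out the exact construction of the LDSvK and GDMvK, but the overall idea, a per-view kernel built by stacking layer-wise kernel compositions and then fused across views, can be sketched. The snippet below is a minimal illustration under assumed choices: an RBF base kernel, an exponential layer-wise composition, and a fixed convex combination standing in for the learned fusion. All function names (rbf_kernel, deep_view_kernel, fused_multiview_kernel) and parameters are hypothetical and are not the paper's actual formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Base RBF kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def deep_view_kernel(X, depth=3, gamma=1.0):
    # Hypothetical stand-in for an LDSvK: stack kernel "layers" by passing
    # each kernel matrix through a nonlinear composition, mimicking the
    # layer-wise structure of a deep network.
    K = rbf_kernel(X, X, gamma)
    for _ in range(depth - 1):
        # exp(K - 1) stays positive semidefinite: exp(K) expands into
        # Hadamard powers of K with nonnegative coefficients.
        K = np.exp(K - 1.0)
    return K

def fused_multiview_kernel(views, weights=None, depth=3):
    # Hypothetical stand-in for the GDMvK: a convex combination of the
    # per-view deep kernels (the paper learns this fusion; fixed here).
    Ks = [deep_view_kernel(X, depth) for X in views]
    if weights is None:
        weights = np.ones(len(Ks)) / len(Ks)
    return sum(w * K for w, K in zip(weights, Ks))

# Toy usage: two "views" of the same 5 samples, with different feature dims.
rng = np.random.default_rng(0)
views = [rng.normal(size=(5, 4)), rng.normal(size=(5, 6))]
K = fused_multiview_kernel(views)
print(K.shape)  # (5, 5) cross-sample similarity under the fused kernel
```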
