Abstract

Since a large proportion of real-world data comprises multiple representations or views, learning from data represented with multiple views (e.g., multiple feature types or modalities) has recently garnered considerable attention. Nonnegative matrix factorization (NMF) has been widely adopted for multi-view learning because of its strong interpretability. In this paper, we focus on unsupervised multi-view data representation (MDR) and propose a novel framework termed Deep Autoencoder-like NMF for MDR (DANMF-MDR), which learns an intact representation by simultaneously exploiting the complementary and consistent information across views. Furthermore, an efficient iterative optimization algorithm is developed to solve the proposed model. Experimental results on three real-world multi-view datasets demonstrate that our method outperforms state-of-the-art multi-view NMF-based MDR approaches.
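For readers unfamiliar with NMF, the building block underlying the proposed framework, the following is a minimal sketch of standard single-view NMF with Lee-Seung multiplicative updates. This is an illustrative baseline only; the paper's DANMF-MDR model, its autoencoder-like deep factorization, and its multi-view solver are more involved, and the function name and parameters here are assumptions for illustration.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Basic NMF via multiplicative updates: V ~= W @ H with W, H >= 0.

    Illustrative single-view baseline, not the paper's DANMF-MDR algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Nonnegative random initialization of both factors.
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity and
        # monotonically decrease the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative data matrix (20 samples, 15 features).
V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H)
```

In a multi-view setting, one such factorization is typically learned per view, with additional terms coupling the per-view representations; the proposed model instead learns a single intact representation shared across views.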
