Abstract

With the increasing availability of multiview nonnegative data in real applications, multiview representation learning based on nonnegative matrix factorization (NMF) has attracted increasing attention. However, existing NMF-based methods are sensitive to noise and struggle to generate discriminative features in the presence of noisy views. To address these problems, we propose a co-regularized multiview nonnegative matrix factorization method with a correlation constraint for nonnegative representation learning, which jointly exploits consistent and complementary information across different views. Unlike previous works, we aim to integrate information from multiple views efficiently while remaining robust to noisy views. More specifically, we exploit the complementary information of multiple views through co-regularization to accommodate noisy views; meanwhile, a correlation constraint is imposed on the low-dimensional space to learn a common latent representation shared by the different views. For the induced objective function, we derive an alternating algorithm to solve the optimization problem. Experimental results on four real datasets demonstrate the effectiveness and robustness of the proposed algorithm.
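To make the co-regularization idea concrete, the following is a minimal sketch of a co-regularized multiview NMF with multiplicative updates. The pairwise penalty `lam * ||H_v - H_w||_F^2` and the update rules are a generic derivation assumed for illustration; they are not the paper's exact algorithm, which additionally imposes the correlation constraint.

```python
import numpy as np

def coreg_multiview_nmf(views, k, lam=0.1, n_iter=200, seed=0):
    """Sketch: factorize each view X_v (d_v x n) as W_v @ H_v with
    nonnegative factors, while lam * ||H_v - H_w||_F^2 pulls the
    per-view latent representations toward each other.
    Hypothetical multiplicative updates, not the paper's exact rules.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    V = len(views)
    W = [rng.random((X.shape[0], k)) for X in views]
    H = [rng.random((k, n)) for _ in views]
    eps = 1e-10  # avoid division by zero
    for _ in range(n_iter):
        for v, X in enumerate(views):
            # Standard NMF multiplicative update for the basis W_v.
            W[v] *= (X @ H[v].T) / (W[v] @ H[v] @ H[v].T + eps)
            # Update for H_v: the co-regularizer adds the other views'
            # representations to the numerator, pulling H_v toward
            # the structure shared across views.
            others = sum(H[w] for w in range(V) if w != v)
            num = W[v].T @ X + lam * others
            den = W[v].T @ W[v] @ H[v] + lam * (V - 1) * H[v] + eps
            H[v] *= num / den
    return W, H
```

A consensus representation can then be obtained, for example, by averaging the per-view factors `H_v`; larger `lam` forces the views to agree more strongly, at the cost of per-view reconstruction accuracy.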

Highlights

  • In real applications, each object can be described by multiple different views or different features [30]

  • We propose a co-regularized nonnegative matrix factorization method with a correlation constraint for robust multiview feature learning, which provides an explicit latent representation by capturing complementary and consistent information across different views

  • We propose a co-regularized multiview nonnegative matrix factorization method with a correlation constraint for nonnegative representation learning


Introduction

Each object can be described by multiple different views or different features [30]. For example, objects can be represented by texture, color, shape, text, and speech. These multiview representations provide complementary information to each other [47]. The traditional method concatenates all the features into a single vector and applies existing algorithms to this vector. However, this ignores the differences in statistical properties between different views and lacks physical meaning [39]. Leveraging the complementary information among views yields better generalization ability than using a single view [7, 23, 25, 30, 46].
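The concatenation baseline described above can be sketched in a few lines; the two views and their dimensions here are hypothetical, chosen only to illustrate the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical views of the same 4 objects (columns are objects):
# a 3-dimensional texture descriptor and a 2-dimensional color descriptor.
texture = rng.random((3, 4))
color = rng.random((2, 4))

# The traditional baseline: stack the views into one feature vector per
# object. This treats all dimensions uniformly, ignoring that the views
# may have different scales and statistical properties.
concatenated = np.vstack([texture, color])  # shape (5, 4)
```

Multiview methods such as the one proposed here instead keep the views separate and couple them only through the learned low-dimensional representation.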

