Abstract

In the absence of label information to guide the learning process, it is challenging to fully exploit and integrate the underlying information from different views to learn a unified multi-view representation. This paper addresses this challenge with a novel method, termed Graph-guided Unsupervised Multi-view Representation Learning (GUMRL), which takes full advantage of multi-view graph information during learning. Specifically, GUMRL jointly performs view-specific feature representation learning, guided by per-view graph information, and unified feature representation learning, which fuses the underlying graph information of the different views to obtain the desired unified multi-view feature representation. For downstream tasks such as clustering and classification, classic single-view algorithms can be applied directly to the learned unified representation. The designed objective function is optimized efficiently with an alternating direction minimization method, and experiments on six real-world multi-view datasets demonstrate the effectiveness and competitiveness of GUMRL compared to several state-of-the-art methods.
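The pipeline the abstract describes (per-view graphs guiding representation learning, fused into one unified representation on which a classic single-view algorithm is run) can be loosely sketched as follows. This is an illustrative sketch only, not the paper's actual objective or optimization: the k-NN graph construction, the spectral embedding of the fused graph, the averaging-based fusion, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def knn_graph(X, k=5):
    # Symmetric k-nearest-neighbour affinity graph; a hypothetical
    # stand-in for the per-view graph information used in GUMRL.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros_like(d)
    rows = np.arange(len(X))[:, None]
    W[rows, np.argsort(d, axis=1)[:, :k]] = 1.0
    return np.maximum(W, W.T)  # symmetrise

def graph_embedding(W, dim=1):
    # Representation from the smallest non-trivial eigenvectors of the
    # combinatorial graph Laplacian (classic spectral embedding,
    # standing in for the learned unified representation).
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]  # skip the constant eigenvector

# Two synthetic "views" of the same 40 samples drawn from 2 clusters.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
views = [rng.normal(y[:, None] * 4.0, 1.0, size=(40, 3)) for _ in range(2)]

# Fuse the per-view graphs by simple averaging, then embed the fused
# graph; the small uniform weight keeps the fused graph connected.
W_fused = sum(knn_graph(X) for X in views) / len(views) + 1e-3
z = graph_embedding(W_fused)[:, 0]

# Downstream task: an unsupervised 2-way split from the sign of the
# unified embedding, evaluated up to label permutation.
pred = (z > 0).astype(int)
acc = max(np.mean(pred == y), np.mean(pred != y))
print(round(acc, 2))
```

Averaging the view graphs is the simplest possible fusion; GUMRL's joint formulation and its alternating direction minimization are considerably more involved than this single-pass spectral sketch.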
