Abstract

Multi-view learning has become a significant research topic in image processing, data mining, and machine learning due to the proliferation of multi-view data. Considering the difficulty of obtaining labeled data in many real applications, we focus on the multi-view unsupervised feature selection problem. Most existing multi-view feature selection methods impose an identical similarity matrix across different views, which cannot preserve the correlation specific to each individual view. Moreover, some of these methods consider only either the global or the local structure. In this paper, we propose an embedding method, Adaptive Similarity Embedding for Unsupervised Multi-View Feature Selection (ASE-UMFS). The method projects the high-dimensional data into a low-dimensional space and unifies the different views through a combination weight matrix. We also constrain the similarity matrix with parameters so that it preserves the local structure, where a regularization term adds a prior of uniform distribution; by further taking into account the independence of the projection matrices among different views, the optimization of the similarity matrix is improved. To confirm the effectiveness of ASE-UMFS, comparisons are made with benchmark algorithms on real-world data sets. The experimental results demonstrate that the proposed algorithm outperforms several state-of-the-art multi-view learning methods.
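
To make the ingredients described above concrete, the following is a minimal sketch of the kind of objective this family of adaptive-similarity embedding methods optimizes; it is an illustrative assumption based on the abstract only, not the exact ASE-UMFS formulation, and the hyper-parameters $\gamma$, $\beta$, $\lambda$, $r$ are placeholders. For views $v = 1, \dots, V$ with data $X^{(v)} \in \mathbb{R}^{n \times d_v}$, view-specific projection matrices $W^{(v)} \in \mathbb{R}^{d_v \times c}$, a shared low-dimensional embedding $F \in \mathbb{R}^{n \times c}$ with rows $f_i$, view-specific similarity matrices $S^{(v)} = (s^{(v)}_{ij})$, and view weights $\alpha = (\alpha_1, \dots, \alpha_V)$:

$$
\min_{\{W^{(v)}, S^{(v)}\}, \alpha, F} \;
\sum_{v=1}^{V} \alpha_v^{r} \Big( \big\| X^{(v)} W^{(v)} - F \big\|_F^2
+ \gamma \big\| W^{(v)} \big\|_{2,1}
+ \beta \sum_{i,j=1}^{n} \big( \| f_i - f_j \|_2^2 \, s^{(v)}_{ij} + \lambda \, (s^{(v)}_{ij})^2 \big) \Big)
$$

$$
\text{s.t.} \quad \alpha \ge 0, \; \textstyle\sum_v \alpha_v = 1, \quad S^{(v)} \mathbf{1} = \mathbf{1}, \; S^{(v)} \ge 0.
$$

In a formulation of this type, the $\ell_{2,1}$ norm on each $W^{(v)}$ induces row sparsity so that features can be ranked per view, the graph term learns a view-specific similarity matrix $S^{(v)}$ adaptively from the shared embedding (preserving the local structure of each view rather than forcing one identical graph), the quadratic penalty $\lambda (s^{(v)}_{ij})^2$ plays the role of the uniform-distribution prior that prevents each row of $S^{(v)}$ from collapsing to a single one-hot entry, and the weights $\alpha_v$ act as the combination weights that unify the views.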
