Abstract

Multi-view multi-label learning is an important framework for learning from objects that have diverse representations and rich semantics. The Gaussian process (GP), an efficient and flexible Bayesian nonparametric model, has been applied successfully to a wide range of machine learning tasks. However, the GP has rarely been applied directly to multi-view multi-label learning. In this paper, we therefore propose a new multi-view multi-label learning method (mvml-DSimGP) based on the Gaussian process latent variable model (GPLVM), which consists of three components: similarity-based nonparametric mapping function learning, regularization constraint learning, and label learning. Similarity-based nonparametric mapping functions are learned for the multi-view data, capturing both the within-view similarities and the parameters of the latent mapping functions across views. Building on this nonparametric function learning, regularization constraints are introduced for multi-label learning: instances with similar label information are encouraged to have similar representations in the embedded space, making the embedded space sufficiently discriminative. Extensive experiments on real-world datasets show that mvml-DSimGP performs well compared with other representative multi-view multi-label learning methods.
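To make the idea concrete, the following Python sketch illustrates, under stated assumptions, how a shared-latent GPLVM objective over multiple views can be combined with a label-similarity regularizer of the kind described above. It is not the authors' implementation: the RBF kernel, the per-view GPLVM negative log-likelihood, the label-overlap penalty `label_similarity_penalty`, the weight `lam`, and the synthetic data are all illustrative choices, and the finite-difference optimizer is used only to keep the toy example self-contained.

```python
# Illustrative sketch (not the paper's method): a shared latent matrix Z is fit to
# two views under per-view GP priors, plus a penalty that pulls instances with
# overlapping labels together in the embedded space.
import numpy as np

def rbf_kernel(Z, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """RBF (squared-exponential) kernel matrix on latent points Z (N x d)."""
    sq = np.sum(Z**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * Z @ Z.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2) + jitter * np.eye(Z.shape[0])

def gplvm_nll(X, K):
    """Negative log marginal likelihood of one view X (N x D) under GP prior K."""
    N, D = X.shape
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, X))   # K^{-1} X
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (D * logdet + np.sum(X * alpha) + N * D * np.log(2 * np.pi))

def label_similarity_penalty(Z, Y):
    """Encourage instances with overlapping labels to stay close in latent space."""
    S = (Y @ Y.T) / np.maximum(1.0, np.sum(Y, axis=1, keepdims=True))  # label overlap
    sq = np.sum(Z**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * Z @ Z.T                          # squared latent distances
    return np.sum(S * d2)

def objective(Z_flat, views, Y, d, lam=0.1):
    Z = Z_flat.reshape(-1, d)
    nll = sum(gplvm_nll(X, rbf_kernel(Z)) for X in views)  # one GP prior per view
    return nll + lam * label_similarity_penalty(Z, Y)

# Tiny synthetic example: two views, three labels, crude numerical-gradient descent.
rng = np.random.default_rng(0)
N, d = 20, 2
views = [rng.normal(size=(N, 5)), rng.normal(size=(N, 8))]
Y = (rng.uniform(size=(N, 3)) > 0.5).astype(float)

Z = rng.normal(size=N * d) * 0.1
eps, lr = 1e-4, 1e-3
for step in range(50):
    f0 = objective(Z, views, Y, d)
    grad = np.zeros_like(Z)
    for i in range(Z.size):            # finite-difference gradient (toy scale only)
        Zp = Z.copy(); Zp[i] += eps
        grad[i] = (objective(Zp, views, Y, d) - f0) / eps
    Z -= lr * grad
print("final objective:", objective(Z, views, Y, d))
```

In the actual method the latent embedding, kernel hyperparameters, and regularization weights would be optimized jointly with analytic gradients; the sketch only shows how the per-view GP likelihoods and the label-similarity constraint enter a single objective.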
