Abstract

Partial Label Learning (PLL) learns from training data in which each example is associated with a set of candidate labels, among which only one is valid. Most existing methods tackle this problem by first disambiguating the candidate labels and then inducing a predictive model from the disambiguated data. However, these methods focus on disambiguating each candidate label set in isolation, while the global label context tends to be ignored. Moreover, they induce the model directly from the original feature information, which may cause overfitting due to high-dimensional, redundant features. To tackle these issues, we propose a novel feature SubspacE Representation and label Global DisambiguatIOn (SERGIO) PLL approach, which improves the generalization ability of the learning system from the perspective of both the feature space and the label space. Specifically, we project the original high-dimensional feature space into a low-dimensional subspace, where the projection matrix is regularized with an orthogonality constraint to make the subspace more compact. Meanwhile, we introduce a label confidence matrix constrained by \(\ell_1\)-norm regularization; this constraint accords well with the nature of the PLL problem and exploits more global partial label correlations. Extensive experiments on various data sets demonstrate that our proposed method achieves competitive performance against state-of-the-art approaches.
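The two ingredients described above can be illustrated with a minimal NumPy sketch. This is not the authors' optimization procedure: the data, the dimensions, and the QR-based stand-in for the learned orthogonality-regularized projection matrix are all illustrative assumptions; it only shows the shape of the constraints (an orthogonal projection \(P^\top P = I\) and a row-normalized label confidence matrix supported on the candidate sets).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy partial-label data (illustrative sizes): n examples, d original
# features, q labels, k-dimensional subspace.
n, d, q, k = 6, 10, 4, 3
X = rng.normal(size=(n, d))

# Candidate label sets: each row marks 2 candidate labels, only one of
# which is actually valid (the learner never sees which).
Y = np.zeros((n, q))
for i in range(n):
    Y[i, rng.choice(q, size=2, replace=False)] = 1.0

# Feature subspace representation: project X into a k-dimensional
# subspace with a column-orthonormal matrix P (P^T P = I).  Here P comes
# from a QR factorization as a stand-in for the learned projection
# matrix that SERGIO regularizes with an orthogonality constraint.
P, _ = np.linalg.qr(rng.normal(size=(d, k)))
Z = X @ P  # low-dimensional representation, shape (n, k)

# Label confidence matrix F: confidences are supported only on each
# example's candidate set and sum to one per row, consistent with an
# l1-style constraint on the confidences.
F = Y / Y.sum(axis=1, keepdims=True)

assert np.allclose(P.T @ P, np.eye(k))  # orthogonality constraint holds
assert np.allclose(F.sum(axis=1), 1.0)  # row confidences sum to one
```

In the actual method, `P` and `F` would be learned jointly by minimizing a reconstruction/prediction loss under these constraints, rather than fixed up front as in this sketch.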
