Abstract

Label ambiguity has attracted considerable attention in the machine learning community. The recently proposed Label Distribution Learning (LDL) paradigm can handle label ambiguity and has found wide application in real-world classification problems. In the training phase, an LDL model is learned first. In the test phase, the top label(s) of the label distribution predicted by the learned model is (are) then taken as the predicted label(s). That is, LDL considers the whole label distribution during training but only the top label(s) during testing, which likely leads to objective inconsistency. To avoid such inconsistency, we propose a new LDL method, Re-Weighting Large Margin Label Distribution Learning (RWLM-LDL). First, we prove that the expected L1-norm loss of LDL bounds the classification error probability, and we therefore adopt the L1-norm loss as the learning metric. Second, re-weighting schemes are put forward to alleviate the inconsistency. Third, a large margin is introduced to further resolve the inconsistency. Theoretical results are presented to establish the generalization and discrimination properties of RWLM-LDL. Finally, experimental results show the statistically superior performance of RWLM-LDL against competing methods.
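To make the train/test mismatch described above concrete, the following minimal sketch (not code from the paper; all names and data are illustrative) shows the two quantities the abstract contrasts: the L1-norm loss computed over the whole predicted label distribution at training time, and the top-label decision used at test time.

```python
import numpy as np

def l1_loss(pred, true):
    """Training-time metric: L1-norm loss over the full label distribution."""
    return np.abs(pred - true).sum()

def top_label(dist):
    """Test-time decision: keep only the label with the highest description degree."""
    return int(np.argmax(dist))

# Illustrative 3-label example (degrees sum to 1, as in a label distribution).
true_dist = np.array([0.6, 0.3, 0.1])
pred_dist = np.array([0.5, 0.4, 0.1])

print(l1_loss(pred_dist, true_dist))  # small L1 loss over the whole distribution
print(top_label(pred_dist))           # classification uses only the argmax label
```

Note how a prediction can have low L1 loss yet flip the top label (e.g. predicting `[0.45, 0.45, 0.1]` leaves the argmax ambiguous); this is the inconsistency that the re-weighting and large-margin components of RWLM-LDL are designed to address.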
