Abstract

While most existing multilabel ranking methods assume the availability of a single objective label ranking for each instance in the training set, this paper deals with a more common case where only subjective, inconsistent rankings from multiple rankers are associated with each instance. Two ranking methods are proposed, from the perspective of instances and of rankers, respectively. The first method, Instance-oriented Preference Distribution Learning (IPDL), learns a latent preference distribution for each instance. IPDL generates a common preference distribution that is most compatible with all the personal rankings, and then learns a mapping from the instances to the preference distributions. The second method, Ranker-oriented Preference Distribution Learning (RPDL), leverages the interpersonal inconsistency among rankers to learn a unified model from the personal preference distribution models of all rankers. These two methods are applied to a natural scene image dataset and the 3D facial expression dataset BU-3DFE. Experimental results show that IPDL and RPDL can effectively incorporate the information given by the inconsistent rankers, and perform remarkably better than the compared state-of-the-art multilabel ranking algorithms.
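As a rough illustration of the instance-oriented idea, the sketch below first aggregates each instance's inconsistent rankings into a single common preference distribution and then fits a mapping from instance features to those distributions. This is not the paper's algorithm: the Borda-style aggregation, the linear-softmax model fit by KL-divergence minimization, and the function names (aggregate_rankings, fit_mapping) are all illustrative assumptions standing in for the objectives the paper actually uses.

```python
# Hypothetical sketch of an IPDL-style pipeline; the paper's exact
# objective is not reproduced here. Aggregation and model are assumptions.
import numpy as np

def aggregate_rankings(rankings, n_labels):
    """Fuse rankings from several rankers into one preference distribution.

    rankings: list of 1-D arrays, each a permutation of label indices
              with the most-preferred label first. Borda scores are
              summed over rankers and normalized into a distribution.
    """
    scores = np.zeros(n_labels)
    for r in rankings:
        for pos, label in enumerate(r):
            scores[label] += n_labels - pos   # Borda count: earlier = higher
    return scores / scores.sum()              # normalize to a distribution

def fit_mapping(X, D, lr=0.1, epochs=500):
    """Fit a linear-softmax map from instances X to distributions D by
    gradient descent on the cross-entropy (equivalently, KL divergence
    up to a constant) between D and softmax(X @ W)."""
    n, d = X.shape
    _, c = D.shape
    W = np.zeros((d, c))
    for _ in range(epochs):
        logits = X @ W
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)      # predicted distributions
        W -= lr * X.T @ (P - D) / n            # softmax cross-entropy gradient
    return W

# Toy usage: 2 instances, 3 labels, 3 rankers giving inconsistent rankings.
rankers = [
    [np.array([0, 1, 2]), np.array([0, 2, 1]), np.array([1, 0, 2])],  # instance 1
    [np.array([2, 1, 0]), np.array([2, 0, 1]), np.array([1, 2, 0])],  # instance 2
]
D = np.vstack([aggregate_rankings(r, 3) for r in rankers])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
W = fit_mapping(X, D)
print(np.round(D, 3))   # common preference distribution per instance
```

The design choice worth noting is that the aggregation step turns ordinal, mutually inconsistent inputs into a smooth target distribution, so the downstream learner can use standard distribution-matching losses instead of rank-specific ones.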
