Abstract

A widespread approach to the ranking problem is to reduce it to a set of binary preferences and apply well-studied classification methods. In particular, we consider this reduction for generic subset ranking, which is based on the minimization of position-sensitive loss functions. The basic question addressed in this paper is whether an accurate classifier transfers directly into a good ranker. We propose a consistent reduction framework guaranteeing that the minimal regret of zero for subset ranking is achievable by learning importance-weighted binary preferences. This fact allows us to further develop a novel upper bound on the subset ranking regret in terms of binary regrets. We show that their ratio can be at most 2 times the maximal deviation of discounts between adjacent positions. We also present a refined version of this bound when only the quality over the top rank positions is of concern. These bounds provide theoretical support for using the resulting binary classifiers to solve the subset ranking problem.
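To make the reduction concrete, the following sketch turns one subset-ranking example into a set of importance-weighted binary preference examples. The relevance-gap weighting used here is a hypothetical illustration of the idea, not the specific weighting scheme derived in the paper.

```python
from itertools import combinations

def to_weighted_pairs(relevances):
    """Reduce one subset-ranking example to weighted binary preferences.

    Each pair (i, j) with relevances[i] > relevances[j] becomes one
    binary example "i should precede j", carrying an importance weight.
    Here the weight is the relevance gap (an illustrative choice; the
    paper's framework derives weights from the position-sensitive loss).
    """
    pairs = []
    for i, j in combinations(range(len(relevances)), 2):
        gap = relevances[i] - relevances[j]
        if gap > 0:
            pairs.append((i, j, gap))
        elif gap < 0:
            pairs.append((j, i, -gap))
    return pairs

# A subset of 3 items with graded relevances 3, 1, 2 yields three
# weighted preference examples for a binary classifier to learn.
print(to_weighted_pairs([3, 1, 2]))
```

A binary classifier trained on such weighted pairs can then be used to order any new subset; the regret bounds in the paper relate its weighted classification error to the induced ranking loss.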
