Abstract

In this paper we consider the ranking problem, which is popular in the machine learning community. The goal is to predict, or to guess, the ordering between objects on the basis of their observed features. We focus on ranking estimators obtained by minimizing an empirical risk with a convex loss function. We pay special attention to "large" families of ranking rules, i.e. those for which the exponent in the entropy condition can be larger than one. In these cases we prove generalization bounds with "fast rates" for the excess risk of ranking estimators.
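To make the setting concrete, here is a minimal sketch of empirical risk minimization for ranking with a convex surrogate loss: a linear scoring rule is fit by gradient descent on the average pairwise logistic loss. The logistic loss, the linear rule, the function names, and the synthetic data are illustrative assumptions only, not the paper's construction, which covers general convex losses and richer rule classes.

```python
import numpy as np

def pairwise_logistic_risk(w, X, y):
    """Empirical ranking risk: average logistic surrogate loss over all
    ordered pairs (i, j) with y[i] > y[j]."""
    total, count = 0.0, 0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                margin = (X[i] - X[j]) @ w
                total += np.log1p(np.exp(-margin))
                count += 1
    return total / max(count, 1)

def fit_linear_ranker(X, y, lr=0.5, steps=300):
    """Minimize the empirical pairwise logistic risk by gradient descent
    over linear scoring rules s(x) = w @ x."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad, count = np.zeros_like(w), 0
        for i in range(len(y)):
            for j in range(len(y)):
                if y[i] > y[j]:
                    d = X[i] - X[j]
                    # d/dm log(1 + exp(-m)) = -1 / (1 + exp(m))
                    grad -= d / (1.0 + np.exp(d @ w))
                    count += 1
        w -= lr * grad / count
    return w

def pairwise_accuracy(w, X, y):
    """Fraction of ordered pairs whose predicted order matches the labels."""
    correct, count = 0, 0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                correct += int((X[i] - X[j]) @ w > 0)
                count += 1
    return correct / count

# Synthetic example: true scores come from a linear rule, so the
# minimizer of the surrogate risk should recover the ordering.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = fit_linear_ranker(X, y)
```

The excess-risk bounds in the paper control how far the surrogate risk of such an empirical minimizer exceeds the best achievable risk over the rule class.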
