Abstract

Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and is widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can achieve higher accuracy than linear RankSVM (RankSVM with a linear kernel) on complex nonlinear ranking problems. However, existing learning methods for nonlinear RankSVM remain time-consuming because of the computation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix explicitly. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
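For concreteness, the pairwise L2-loss (squared hinge loss) RankSVM objective mentioned above can be written as follows. This is the standard formulation sketched in generic notation rather than quoted from the paper: w is the weight vector, φ(·) the (approximated) kernel feature map, P the set of preference pairs (i, j) in which item i should rank above item j, and C the regularization parameter.

```latex
\min_{w}\ \frac{1}{2}\,\lVert w \rVert^{2}
  + C \sum_{(i,j)\in P}
      \max\!\bigl(0,\ 1 - w^{\top}\bigl(\phi(x_i)-\phi(x_j)\bigr)\bigr)^{2}
```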

Highlights

  • Learning to rank is an important research area in machine learning. It has attracted the interest of many researchers because of its growing applications in areas such as information retrieval systems [1], recommender systems [2, 3], machine translation, and computational biology [4]

  • Ranking support vector machine (RankSVM) [5], which is extended from the basic support vector machine (SVM) [6], is one of the commonly used methods

  • We propose a fast RankSVM algorithm with kernel approximation to solve the problem of lengthy training time of kernel RankSVM

Summary

Introduction

Learning to rank is an important research area in machine learning. It has attracted the interest of many researchers because of its growing applications in areas such as information retrieval systems [1], recommender systems [2, 3], machine translation, and computational biology [4]. A primal truncated Newton method can be used to solve the linear RankSVMstruct problem without the need for an explicit pairwise transformation. The random Fourier features method approximates a shift-invariant kernel based on the Fourier transform of a nonnegative measure [15]. We use kernel approximation to address the lengthy training time of kernel RankSVM; to the best of our knowledge, this is the first work that applies kernel approximation to the learning to rank problem. We use two types of approximation methods, namely, the Nyström method and random Fourier features, to map the features into a high-dimensional space. A primal truncated Newton method is then used to optimize the pairwise L2-loss (squared hinge loss) function of the RankSVM model. Experimental results demonstrate that our proposed method achieves high performance with much faster training than kernel RankSVM. Matlab code for our algorithm is available online (https://github.com/KaenChan/rank-kernel-appr)
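To illustrate the overall pipeline, here is a minimal Python sketch; the released implementation is in Matlab, so this is not the authors' code. It uses scikit-learn's RBFSampler (random Fourier features) or Nystroem transformer to build an explicit approximate feature map and, for simplicity, trains the ranker by forming explicit pairwise difference vectors and fitting a linear SVM with squared hinge loss, rather than the primal truncated Newton solver used in the paper. The data shapes and the gamma, n_components, and C values are illustrative assumptions.

```python
# Minimal sketch of RankSVM with kernel approximation (not the authors' code).
import numpy as np
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.svm import LinearSVC

def pairwise_transform(X, y, qid):
    """Build difference vectors x_i - x_j for items of the same query
    where item i is more relevant than item j (labels +1 / -1)."""
    diffs, labels = [], []
    for q in np.unique(qid):
        idx = np.where(qid == q)[0]
        for i in idx:
            for j in idx:
                if y[i] > y[j]:
                    diffs.append(X[i] - X[j]); labels.append(1)
                    diffs.append(X[j] - X[i]); labels.append(-1)
    return np.array(diffs), np.array(labels)

# Toy data: 100 items, 5 features, 10 queries, graded relevance 0-2 (illustrative).
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = rng.randint(0, 3, size=100)
qid = rng.randint(0, 10, size=100)

# Approximate an RBF kernel with an explicit feature map.
# Swap RBFSampler for Nystroem(kernel='rbf', gamma=0.5, n_components=200)
# to use the Nystrom method instead of random Fourier features.
feature_map = RBFSampler(gamma=0.5, n_components=200, random_state=0)
Z = feature_map.fit_transform(X)

# Train a linear ranker on the pairwise differences (squared hinge / L2 loss).
Xp, yp = pairwise_transform(Z, y, qid)
ranker = LinearSVC(loss='squared_hinge', C=1.0, fit_intercept=False)
ranker.fit(Xp, yp)

# Scores for ranking new items: higher score means ranked earlier.
scores = ranker.decision_function(feature_map.transform(X))
```

Because the approximate feature map is explicit, the two approximation schemes can be exchanged without changing the rest of the pipeline.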

Background and Related Works
RankSVM with Kernel Approximation
Kernel Approximation
Experiments
Conclusions