Abstract

Most existing methods for search result diversification (SRD) adopt a greedy strategy to generate diversified results: documents are selected one by one in a sequential process, with a locally optimal choice made at each round. Unfortunately, this strategy suffers from the following shortcomings: (1) the one-by-one selection process is time-consuming for both training and inference; (2) it works well only on the premise that the preceding choices are optimal or close to optimal; and (3) it does not account for the mismatch between the objective function used in training and the evaluation measure used in testing. We propose a novel framework for SRD based on direct metric optimization (referred to as MO4SRD), which follows the score-and-sort strategy. Specifically, we represent the diversity score of each document, which determines its rank position, as a probability distribution. These distributions over scores naturally give rise to expectations over rank positions, which allows us to derive differentiable variants of the widely used diversity metrics and thus to directly optimize the evaluation measure used in testing. Moreover, we devise a novel probabilistic neural scoring function that jointly scores candidate documents while accounting for both cross-document interaction and permutation equivariance, making it possible to generate a diversified ranking via a simple sort. Experimental results on benchmark collections show that the proposed method significantly outperforms state-of-the-art methods.
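To make the expected-rank idea concrete, the following is a minimal sketch, not the authors' implementation, of how probabilistic scores give rise to smooth expected rank positions. It assumes Gaussian score distributions with a shared standard deviation, in the spirit of SoftRank-style approaches; the function name expected_ranks and the parameter sigma are illustrative assumptions.

```python
# Minimal sketch (assumed Gaussian scores, shared sigma; not the paper's code).
# If each document's diversity score is s_i ~ N(mu_i, sigma^2), then the
# probability that document j outranks document i has a closed form, and
# summing these pairwise probabilities yields a smooth expected rank on
# which differentiable diversity metrics can be built.
import numpy as np
from scipy.stats import norm

def expected_ranks(mu, sigma=1.0):
    """Smooth expected rank position for each candidate document.

    mu: array of mean diversity scores, one per document.
    sigma: shared standard deviation of the score distributions (assumed).
    """
    # For independent Gaussians with equal variance:
    # P(s_j > s_i) = Phi((mu_j - mu_i) / (sqrt(2) * sigma))
    diff = mu[None, :] - mu[:, None]            # diff[i, j] = mu_j - mu_i
    p_outranked = norm.cdf(diff / (np.sqrt(2.0) * sigma))
    np.fill_diagonal(p_outranked, 0.0)          # a document cannot outrank itself
    # Expected rank = 1 + expected number of documents scoring above it.
    return 1.0 + p_outranked.sum(axis=1)

mu = np.array([2.0, 0.5, 1.2])
print(expected_ranks(mu))  # higher mean score -> expected rank closer to 1
```

Because each expected rank is a smooth function of the mean scores, a diversity metric evaluated at these expected positions is differentiable with respect to the scoring model's outputs, which is what makes direct optimization of the test-time evaluation measure possible during training.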
