Abstract
Learning to rank (LTR) applies machine learning techniques to rank search results. Reinforcement-learning-based ranking models have recently achieved some success on the LTR task. However, these models suffer from drawbacks such as high-variance gradient estimates and training inefficiency, which pose great challenges to the convergence and accuracy of the ranking model. Combining short- and long-term returns, this paper proposes AMRank, an adversarial Markov ranking model based on reinforcement learning that formalizes the ranking task as a Markov decision process. To address the aforementioned weaknesses, AMRank introduces a sequence discriminator that outputs a long-term return with smaller variance and enables single-step updates, together with a document discriminator that yields a short-term return. The two discriminators are trained simultaneously before each decision is made. During training, the policy network acts as a generator that samples candidate documents to produce negative samples. At each decision step, the discriminators output returns based on the environment state and the policy, and the parameters of the policy network are then updated with the policy gradient method. Experimental results on three LETOR benchmark datasets, OHSUMED, MQ2007, and MQ2008, demonstrate that the proposed AMRank outperforms the baseline models on the document ranking task.
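To make the described training loop concrete, below is a minimal sketch of an adversarial generator-discriminator setup of this kind, written in PyTorch with toy data. The network shapes, the reward-mixing weight `alpha`, the sampling scheme, and all variable names are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumptions throughout): a policy network samples documents,
# a document discriminator gives a short-term return, a sequence discriminator
# gives a long-term return, and a REINFORCE-style update trains the policy.
import torch
import torch.nn as nn

FEAT_DIM, HIDDEN = 46, 64  # LETOR-style feature size (assumed)

class PolicyNet(nn.Module):
    """Generator: scores the candidate documents for the current query."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1))
    def forward(self, docs):                     # docs: (n_candidates, FEAT_DIM)
        return torch.softmax(self.net(docs).squeeze(-1), dim=0)

class Discriminator(nn.Module):
    """Scores a single document (short-term) or a pooled sequence (long-term)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).squeeze(-1)

policy, d_doc, d_seq = PolicyNet(), Discriminator(), Discriminator()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(list(d_doc.parameters()) + list(d_seq.parameters()),
                         lr=1e-3)
bce, alpha = nn.BCELoss(), 0.5                   # alpha: assumed reward mix

docs = torch.randn(20, FEAT_DIM)                 # candidate set (toy data)
relevant = torch.randn(5, FEAT_DIM)              # labeled relevant docs (toy data)

for step in range(100):
    # --- Train both discriminators simultaneously:
    #     real = labeled documents, fake = negatives sampled by the policy ---
    probs = policy(docs).detach()
    fake = docs[torch.multinomial(probs, 5, replacement=False)]
    d_loss = (bce(d_doc(relevant), torch.ones(5)) +
              bce(d_doc(fake), torch.zeros(5)) +
              bce(d_seq(relevant.mean(0, keepdim=True)), torch.ones(1)) +
              bce(d_seq(fake.mean(0, keepdim=True)), torch.zeros(1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Policy gradient step: the reward combines the short-term
    #     (per-document) and long-term (sequence-level) returns ---
    probs = policy(docs)
    idx = torch.multinomial(probs, 5, replacement=False)
    sampled = docs[idx]
    with torch.no_grad():
        reward = (alpha * d_doc(sampled) +
                  (1 - alpha) * d_seq(sampled.mean(0, keepdim=True)))
    p_loss = -(torch.log(probs[idx] + 1e-8) * reward).sum()  # REINFORCE update
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```

In this sketch the sequence-level return is a single scalar shared across the sampled ranking, which is one simple way to realize a lower-variance long-term signal; the paper's actual discriminator inputs and update schedule may differ.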