Abstract
Existing learning-to-rank methods often ignore the relationships between ranking features; if those relationships were fully exploited, ranking performance could be improved. To address this problem, this paper proposes a learning-to-rank approach, named *GAN-LTR, that combines a multi-head self-attention mechanism with Conditional Generative Adversarial Nets (CGAN). The proposed approach refines several design ideas of the Information Retrieval Generative Adversarial Networks (IRGAN) framework for web search, constructing a new network model that integrates convolutional layers, a multi-head self-attention layer, residual connections, fully connected layers, batch normalization, and dropout into the generator and discriminator of the CGAN. The convolutional neural network extracts hidden-layer representations of the ranking features and captures the internal correlations and interactions among them. The multi-head self-attention mechanism fuses feature information across multiple vector subspaces and learns attention weights over the features, so that different features are assigned appropriate weights. Experimental results on the MQ2008-semi learning-to-rank dataset show that, compared with IRGAN, the proposed *GAN-LTR achieves overall performance advantages across the evaluation metrics.
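To make the attention component concrete, the following is a minimal sketch of multi-head self-attention over a set of ranking-feature vectors, implemented in plain NumPy. The shapes, weight matrices, and head count here are illustrative assumptions for exposition; the paper's actual *GAN-LTR layers, dimensions, and parameterization are not specified in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Sketch of multi-head self-attention (illustrative, not the paper's exact model).

    X:  (n, d) matrix of n feature vectors of dimension d.
    Wq, Wk, Wv, Wo: (d, d) projection matrices (assumed square here).
    Returns the attended output (n, d) and per-head attention weights (h, n, n).
    """
    n, d = X.shape
    dh = d // num_heads  # dimension of each head's subspace

    # Project inputs to queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Split the projections into num_heads subspaces: (h, n, dh).
    def split(M):
        return M.reshape(n, num_heads, dh).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)

    # Scaled dot-product attention within each head: (h, n, n).
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh)
    weights = softmax(scores, axis=-1)  # rows sum to 1 per head

    # Weighted sum of values, then concatenate heads and project out.
    heads = weights @ Vh                          # (h, n, dh)
    concat = heads.transpose(1, 0, 2).reshape(n, d)
    return concat @ Wo, weights

# Illustrative usage: 5 items, 8-dimensional features, 2 heads.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv, Wo = (rng.standard_normal((8, 8)) * 0.1 for _ in range(4))
out, attn = multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads=2)
```

Each head attends over the items in its own vector subspace, and the learned attention weights determine how strongly each feature vector influences the others, which is the mechanism the abstract credits with assigning appropriate weights to different features.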