Abstract

Learning to rank (LTR) is an important artificial intelligence (AI) approach that supports the operation of many search engines. In large-scale search systems, ranking results are continually improved by introducing more factors for LTR to consider. However, the more factors considered, the more computational resources are required, which in turn increases system response latency. Removing redundant factors can therefore significantly improve search engine efficiency. In this paper, we report on our experience incorporating our Contextual Factor Selection (CFS) deep reinforcement learning approach into the Taobao e-commerce platform, which optimizes the selection of factors based on the context of each search query to maintain search result quality while significantly reducing latency. Online deployment on Taobao.com demonstrated that CFS reduces average search latency under everyday use scenarios by more than 40% compared to the previous approach, with comparable search result quality. Under peak usage during the Singles' Day Shopping Festival (November 11th) in 2017, CFS reduced average search latency by 20% compared to the previous approach.
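To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's actual CFS implementation: a policy that, given a per-query context, selects a subset of ranking factors by trading off each factor's estimated quality contribution against its latency cost. The factor names, quality gains, and latency costs below are invented for illustration, and the greedy budget heuristic stands in for the learned deep reinforcement learning policy described in the abstract.

```python
# Hypothetical factor table: factor name -> (quality_gain, latency_cost_ms).
# All numbers are illustrative, not from the paper.
FACTORS = {
    "text_relevance":  (0.50, 5.0),
    "click_model":     (0.30, 8.0),
    "personalization": (0.25, 12.0),
    "deep_match":      (0.20, 20.0),
}

def select_factors(context_weight, latency_budget_ms):
    """Greedily pick factors with the best (context-weighted) quality-per-
    latency ratio until the latency budget is exhausted.

    context_weight maps factor names to multipliers that mimic query-context
    dependence (e.g. personalization matters more for logged-in users);
    missing factors default to 1.0. Returns (chosen_factors, latency_spent).
    """
    ranked = sorted(
        FACTORS.items(),
        key=lambda kv: context_weight.get(kv[0], 1.0) * kv[1][0] / kv[1][1],
        reverse=True,
    )
    chosen, spent = [], 0.0
    for name, (gain, cost) in ranked:
        if spent + cost <= latency_budget_ms:
            chosen.append(name)
            spent += cost
    return chosen, spent
```

For example, a generous 50 ms budget admits every factor, while a tight 10 ms budget keeps only the cheapest high-value factor; in the real system this per-query trade-off is learned rather than hand-coded.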
