Abstract

In real business platforms, recommendation systems usually need to predict the CTR for multiple business scenarios. Since different scenarios may share common feature interactions, knowledge-transfer methods are often used, re-optimizing a CTR model pre-trained on source scenarios for a new target domain. Beyond knowledge transfer, it is noteworthy that accurately generalizing to target-domain data not covered by the pre-trained CTR model is also important when re-training all of the fine-tuned parameters. Generally, a model pre-trained on large source domains can represent the characteristics of diverse instances and capture typical feature interactions, so it would be useful to directly reuse fine-tuned parameters from source domains to serve the target domain. However, different instances of the target domain may need different amounts of source information when fine-tuning the model parameters, and the decisions to freeze or re-optimize model parameters, which depend heavily on the fine-tuned model and the target instances, may require substantial manual effort. In this paper, we propose an end-to-end transfer learning framework with fine-tuned parameters for CTR prediction, called Automatic Fine-Tuning (AutoFT). The principal component of AutoFT is a set of learnable transfer policies that independently determine how instance-specific fine-tuning should be carried out, deciding the routing of the embedding representations and the high-order feature representations layer by layer in the deep CTR model. Extensive experiments on two benchmarks and one commercial recommender system deployed in Huawei's App Store show that AutoFT substantially improves CTR prediction performance compared with existing transfer methods.
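The abstract describes per-layer routing between pre-trained (source) and fine-tuned (target) parameters, controlled by a learnable, instance-dependent policy. The sketch below is a minimal, hypothetical simplification of that idea, not AutoFT's actual implementation: a single layer keeps a frozen pre-trained weight matrix and a fine-tuned copy, and a small policy head produces a per-instance gate that mixes the two outputs. The class and variable names (`RoutedLayer`, `w_policy`, etc.) are illustrative assumptions.

```python
import numpy as np


def sigmoid(x):
    """Numerically plain logistic function used as the routing gate."""
    return 1.0 / (1.0 + np.exp(-x))


class RoutedLayer:
    """One hidden layer with instance-based routing between a frozen
    pre-trained transform and its fine-tuned copy (hypothetical sketch
    of the learnable transfer-policy idea described in the abstract)."""

    def __init__(self, w_pretrained, w_finetuned, w_policy):
        self.w_pre = w_pretrained  # frozen source-domain weights
        self.w_ft = w_finetuned    # target-domain fine-tuned weights
        self.w_pol = w_policy      # policy weights producing the gate

    def forward(self, x):
        # Instance-specific gate in (0, 1): how much fine-tuned
        # (vs. pre-trained) information this instance should use.
        g = sigmoid(x @ self.w_pol)
        out_pre = np.maximum(x @ self.w_pre, 0.0)  # ReLU activations
        out_ft = np.maximum(x @ self.w_ft, 0.0)
        # Soft routing: convex combination of the two branch outputs.
        return g * out_ft + (1.0 - g) * out_pre
```

In a full model, one such gate would be learned per layer (for both embedding and high-order feature layers), and training the policy end-to-end replaces the manual freeze-or-re-optimize decisions the abstract mentions.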
