Abstract

Latent factor (LF)-based models have proven efficient for implementing recommender systems, owing to their ability to represent high-dimensional and sparse matrices well. While prior work has focused on boosting both the prediction accuracy and the computational efficiency of the original LF model by adding linear biases to it, the individual and combined effects of these linear biases on such performance gains remain unclear. To address this issue, this work thoroughly investigates the effects of prior linear biases and training linear biases. We analyze the parameter update rules and training processes of an LF model under different combinations of linear biases. Empirical validations are conducted on a high-dimensional and sparse matrix from industrial systems currently in use. The results show that each linear bias has a positive or negative effect on the performance of an LF model. Such effects are partially data-dependent; however, some linear biases, such as the global average, bring stable performance gains to an LF model. The theoretical and empirical results, along with the accompanying analysis, provide guidance for designing the bias scheme of an LF model for recommender systems.
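To make the setting concrete, below is a minimal sketch of the widely used biased matrix factorization form, in which the prediction combines a global average with trained user/item linear biases and latent factors, and all trained terms are updated by stochastic gradient descent. This is a standard formulation, not necessarily the exact bias scheme studied in the paper; the function name, learning rate, and regularization constant are illustrative assumptions.

```python
import numpy as np

def sgd_step(r_ui, mu, b_u, b_i, p_u, q_i, lr=0.01, reg=0.02):
    """One SGD step for a biased LF model on a single observed rating r_ui.

    Prediction: r_hat = mu + b_u + b_i + p_u . q_i
    mu is the (fixed) global average; b_u, b_i are trained linear biases;
    p_u, q_i are the user/item latent factor vectors.
    Hyperparameters lr and reg are illustrative, not taken from the paper.
    """
    e = r_ui - (mu + b_u + b_i + p_u @ q_i)   # prediction error

    # Linear-bias updates (gradient step with L2 regularization).
    b_u = b_u + lr * (e - reg * b_u)
    b_i = b_i + lr * (e - reg * b_i)

    # Latent-factor updates; cache p_u so both factors use the pre-update values.
    p_old = p_u.copy()
    p_u = p_u + lr * (e * q_i - reg * p_u)
    q_i = q_i + lr * (e * p_old - reg * q_i)

    return b_u, b_i, p_u, q_i
```

In this formulation, mu acts as a prior linear bias (computed once from the observed entries and held fixed), while b_u and b_i are training linear biases updated alongside the latent factors; dropping or combining these terms yields the different bias configurations whose effects the paper compares.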
