Abstract

Methods from optimization theory have played an important role in developing training algorithms for matrix factorization in recommender systems. Indeed, the realization that simple stochastic unconstrained gradient descent can be successfully applied to the factorization of the user-item matrix is responsible, to a great extent, for the recent research interest in this area and for the introduction of a plethora of matrix factorization methods. In this paper, motivated by earlier approaches to training neural networks, we introduce a constrained optimization framework for incorporating additional knowledge into the matrix factorization formalism, which can overcome certain drawbacks of the unconstrained minimization approach. We examine two types of such additional knowledge and, by incorporating each type into the constrained optimization framework, derive two corresponding algorithms. Both algorithms are designed to improve convergence and accuracy within the broader class of matrix factorization methods for recommender systems.
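The abstract refers to unconstrained stochastic gradient descent on the user-item matrix as the baseline that the paper's constrained framework builds on. As a rough illustration of that baseline only (not the paper's constrained algorithms), the sketch below factors a small rating matrix as R ≈ PQᵀ with SGD over the observed entries; the function name, hyperparameters, and the treatment of zeros as missing entries are all illustrative assumptions.

```python
import numpy as np

def sgd_mf(ratings, n_factors=2, lr=0.01, reg=0.02, epochs=500, seed=0):
    """Illustrative unconstrained SGD factorization: R ~ P @ Q.T.

    Zeros in `ratings` are treated as missing; only observed entries
    contribute to the squared-error loss, with L2 regularization.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    P = rng.normal(scale=0.1, size=(n_users, n_factors))
    Q = rng.normal(scale=0.1, size=(n_items, n_factors))
    observed = np.argwhere(ratings > 0)  # (user, item) index pairs
    for _ in range(epochs):
        rng.shuffle(observed)  # visit observed entries in random order
        for u, i in observed:
            pu, qi = P[u].copy(), Q[i].copy()  # snapshot before updating
            err = ratings[u, i] - pu @ qi      # prediction error
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q
```

The per-entry updates follow the gradient of the regularized squared error; constrained variants, as in the paper, would additionally project or restrict the factors rather than descend freely.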
