Abstract
Stochastic gradient descent (SGD) and alternating least squares (ALS) are two popular algorithms for matrix factorization, and recent research has focused on parallelizing them for ever-growing datasets. On large-scale data, however, SGD converges slowly and its convergence is sensitive to the choice of learning-rate parameters, while ALS does not scale because its complexity is cubic in the target rank. A further issue concerns deployment: most parallel algorithms factorize a fixed batch of training data, whereas real systems receive data as a continuous stream. In this work, the authors propose FSGD, an algorithm that overcomes these drawbacks on large-scale data by building on coordinate descent, a novel optimization approach for this problem. FSGD updates rank-one factors one by one, yielding faster and more stable convergence than SGD and ALS. In addition, FSGD is easy to parallelize and can operate on a stream of incoming data. Experimental results show that FSGD solves the matrix factorization problem considerably better than existing state-of-the-art parallel models.
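The abstract does not give implementation details, but the core idea it describes, updating rank-one factors one by one via coordinate descent, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: the function name, the regularizer lam, the iteration counts, and the dense-matrix representation are all hypothetical choices made for brevity.

```python
# Minimal sketch of rank-one coordinate descent for matrix factorization,
# in the style the abstract describes. Illustrative only; all names and
# hyperparameters (lam, iteration counts) are assumptions, not the paper's.
import numpy as np

def factorize(R, mask, k=10, lam=0.1, outer_iters=20, inner_iters=3):
    """Approximate R ~= W @ H.T on observed entries (mask == 1).

    R    : (m, n) ratings matrix, zeros where unobserved
    mask : (m, n) binary matrix marking observed entries
    """
    m, n = R.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((m, k)) * 0.01
    H = rng.standard_normal((n, k)) * 0.01
    # Residual of the current model on the observed entries.
    E = mask * (R - W @ H.T)
    for _ in range(outer_iters):
        for t in range(k):                  # update rank-one factors one by one
            w, h = W[:, t], H[:, t]
            # Add this factor's contribution back into the residual.
            Rhat = E + mask * np.outer(w, h)
            for _ in range(inner_iters):
                # Closed-form regularized least-squares coordinate updates,
                # summing only over observed entries (Rhat is masked).
                w = (Rhat @ h) / (lam + mask @ (h * h))
                h = (Rhat.T @ w) / (lam + mask.T @ (w * w))
            W[:, t], H[:, t] = w, h
            # Remove the updated factor's contribution again.
            E = Rhat - mask * np.outer(w, h)
    return W, H
```

Because each inner update touches only one rank-one factor, the per-step cost is linear in the number of observed entries rather than cubic in the rank, which is the scalability advantage the abstract claims over ALS.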