Abstract

A high-dimensional and incomplete (HDI) matrix is a common form of big data in many industrial applications. A latent factor analysis (LFA) model optimized by the stochastic gradient descent (SGD) algorithm is often adopted to learn the abundant knowledge in an HDI matrix. Despite its computational tractability and scalability, the regular SGD algorithm tends to get stuck in a local optimum when solving a bilinear problem such as LFA. To address this issue, this paper proposes an Adjusted Stochastic Gradient Descent (ASGD) algorithm for latent factor analysis, whose adjustment mechanism considers the bi-polar gradient directions during optimization; this mechanism is theoretically proven to be efficient in overstepping local saddle points and avoiding premature convergence. In addition, the model's hyper-parameters are made self-adaptive via the particle swarm optimization (PSO) algorithm, for higher practicality. Experimental results show that the proposed model outperforms other state-of-the-art approaches on six different HDI matrices from industrial applications, especially in prediction accuracy for missing data.
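To make the setting concrete, below is a minimal sketch of the plain SGD-optimized LFA baseline that the abstract builds on: two low-rank factor matrices are trained on only the known entries of an HDI matrix. The function name, rank, learning rate, and regularization weight are illustrative assumptions; the paper's ASGD adjustment (bi-polar gradient directions) and the PSO-based hyper-parameter adaptation are not detailed in the abstract and are deliberately omitted.

```python
import numpy as np

def sgd_lfa(entries, shape, rank=4, lr=0.02, reg=0.05, epochs=500, seed=0):
    """Plain SGD-based latent factor analysis on the observed entries of a
    high-dimensional and incomplete (HDI) matrix.

    entries: iterable of (row, col, value) triples for the known cells only.
    Returns row factors P (m x rank) and column factors Q (n x rank); the
    prediction for cell (i, j) is P[i] @ Q[j].

    NOTE: this is the regular SGD baseline only; it does NOT implement the
    paper's ASGD adjustment or PSO hyper-parameter tuning.
    """
    rng = np.random.default_rng(seed)
    m, n = shape
    P = rng.normal(scale=0.1, size=(m, rank))  # row latent factors
    Q = rng.normal(scale=0.1, size=(n, rank))  # column latent factors
    for _ in range(epochs):
        for i, j, r in entries:                # visit known entries only
            err = r - P[i] @ Q[j]              # prediction error on this cell
            # Tuple assignment so both updates use the pre-update factors.
            P[i], Q[j] = (P[i] + lr * (err * Q[j] - reg * P[i]),
                          Q[j] + lr * (err * P[i] - reg * Q[j]))
    return P, Q

# Toy HDI matrix: only 5 of the 9 cells of a 3x3 matrix are observed.
obs = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 2.0)]
P, Q = sgd_lfa(obs, shape=(3, 3))
rmse = np.sqrt(np.mean([(r - P[i] @ Q[j]) ** 2 for i, j, r in obs]))
```

After training, `P[i] @ Q[j]` also yields predictions for the *missing* cells, which is the use case the abstract evaluates (prediction accuracy for missing data). The local-optimum issue the paper targets arises because the loss is bilinear in `P` and `Q`, so this plain SGD loop has no mechanism for escaping saddle points.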
