Abstract

A latent factor analysis (LFA) model can efficiently represent a high-dimensional and sparse (HiDS) matrix when trained with a stochastic gradient descent (SGD) algorithm. However, an SGD-based LFA model's performance depends heavily on its hyper-parameters, and the popular grid-search approach to tuning them is computationally expensive and time-consuming. Aiming at a hyper-parameter-free LFA model, this study proposes an adaptive moment estimation-incorporated particle swarm optimization (Adam-PSO) algorithm that efficiently addresses the premature-convergence issue of a standard PSO algorithm. By achieving hyper-parameter adaptation in an SGD-based LFA model, an Adam-PSO-based LFA (APL) model with hyper-parameter-free training is further implemented. Empirical studies on four HiDS matrices indicate that, compared with state-of-the-art models with hyper-parameter adaptation, an APL model achieves the most efficient hyper-parameter-free training and highly competitive accuracy when predicting the missing data of an HiDS matrix. Hence, it fits the needs of real applications demanding high scalability and efficiency.
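To make the idea concrete, the following is a minimal, illustrative sketch (not the paper's actual APL algorithm) of the two ingredients the abstract names: an SGD-based LFA trainer whose hyper-parameters (learning rate `eta` and regularization `lam`) are tuned by a PSO loop with an Adam-style moment-estimated velocity update. All function names, the toy data, and the specific constants are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HiDS stand-in: (row, col, value) triples of the few observed entries.
entries = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]

def train_rmse(eta, lam, epochs=50, k=4):
    """Train an SGD-based LFA model with given hyper-parameters; return RMSE
    on the observed entries. eta/lam are the values a swarm particle carries."""
    P = rng.standard_normal((2, k)) * 0.1   # row latent factors
    Q = rng.standard_normal((2, k)) * 0.1   # column latent factors
    for _ in range(epochs):
        for u, i, r in entries:
            err = r - P[u] @ Q[i]
            P[u] += eta * (err * Q[i] - lam * P[u])
            Q[i] += eta * (err * P[u] - lam * Q[i])
    return np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in entries]))

# PSO over (eta, lam). Instead of the classic velocity formula, each particle's
# pull toward its personal best and the global best is smoothed with Adam-style
# first/second moment estimates -- one way to damp premature convergence.
n, dims = 6, 2
pos = rng.uniform([0.001, 0.0], [0.1, 0.1], size=(n, dims))
m = np.zeros((n, dims)); v = np.zeros((n, dims))   # Adam moment estimates
pbest = pos.copy()
pcost = np.array([train_rmse(*p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
b1, b2, eps, step = 0.9, 0.999, 1e-8, 0.02
for t in range(1, 11):
    pull = (rng.random((n, dims)) * (pbest - pos)
            + rng.random((n, dims)) * (gbest - pos))
    m = b1 * m + (1 - b1) * pull
    v = b2 * v + (1 - b2) * pull ** 2
    vel = step * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    pos = np.clip(pos + vel, [1e-4, 0.0], [0.2, 0.2])
    cost = np.array([train_rmse(*p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()
```

After the loop, `gbest` holds the swarm's best `(eta, lam)` pair, so the user never sets these hyper-parameters manually; the bias-corrected moments keep early, noisy pulls from collapsing the swarm onto one point.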
