Abstract

The alternating least squares (ALS) algorithm, being classic and easy to implement, has been widely applied to tensor decomposition and approximation problems; nevertheless, it has drawbacks: its convergence is not guaranteed, and in some cases the swamp phenomenon appears, slowing the convergence rate dramatically. To overcome these shortcomings, the regularized ALS algorithm (RALS) was proposed in the literature. In this paper, by employing an optimal step-size selection rule, we develop a self-adaptive regularized alternating least squares method (SA-RALS) that accelerates RALS. Theoretically, we show that the step-size is always larger than unity and can exceed [Formula: see text], which is quite different from several optimization algorithms. Furthermore, under mild assumptions, we prove that the whole sequence generated by SA-RALS converges to a stationary point of the objective function. Numerical results verify that SA-RALS outperforms RALS in terms of the number of iterations and the CPU time.
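To make the setting concrete, the sketch below shows, in Python/NumPy, one regularized-ALS sweep for a third-order CP decomposition, with each factor update followed by an extrapolation step. The function names, the fixed regularization parameter `lam`, and the constant step size `omega` are illustrative assumptions; the actual SA-RALS chooses the step size adaptively through the optimal selection rule derived in the paper, which is not reproduced here.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (I x R) and V (J x R) -> (I*J x R)."""
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor, consistent with C-order reshaping."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def rals_update(T, factors, mode, lam):
    """Regularized (proximal) least-squares update of one CP factor matrix."""
    others = [factors[m] for m in range(3) if m != mode]
    KR = khatri_rao(others[0], others[1])          # matches the unfold's column order
    G = (others[0].T @ others[0]) * (others[1].T @ others[1])  # KR^T KR via Hadamard product
    X = unfold(T, mode)
    R = G.shape[0]
    rhs = X @ KR + lam * factors[mode]             # proximal term keeps the new factor near the old one
    return np.linalg.solve(G + lam * np.eye(R), rhs.T).T

def sa_rals_sweep(T, factors, lam=1e-2, omega=1.2):
    """One sweep over the three factors, extrapolating each update by omega."""
    for mode in range(3):
        new = rals_update(T, factors, mode, lam)
        # omega > 1 mimics the accelerated step; SA-RALS itself picks omega
        # adaptively via the paper's optimal step-size rule (assumption here).
        factors[mode] = factors[mode] + omega * (new - factors[mode])
    return factors

# Toy usage: rank-3 approximation of a random 10 x 10 x 10 tensor.
rng = np.random.default_rng(0)
T = rng.standard_normal((10, 10, 10))
factors = [rng.standard_normal((10, 3)) for _ in range(3)]
for _ in range(50):
    factors = sa_rals_sweep(T, factors)
```

Setting `omega = 1` reduces the sweep to plain RALS, so the extrapolation line is the only place where the self-adaptive rule would enter.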
