Abstract

Although the alternating least squares (ALS) algorithm is classic, easy to implement, and widely applied to tensor decomposition and approximation problems, it has drawbacks: its convergence is not guaranteed, and in some cases the swamp phenomenon appears, slowing the convergence rate dramatically. To overcome these shortcomings, the regularized ALS algorithm (RALS) was proposed in the literature. In this paper, by employing an optimal step-size selection rule, we develop a self-adaptive regularized alternating least squares method (SA-RALS) to accelerate RALS. Theoretically, we show that the step-size is always larger than unity and can exceed [Formula: see text], which is quite different from several other optimization algorithms. Furthermore, under mild assumptions, we prove that the whole sequence generated by SA-RALS converges to a stationary point of the objective function. Numerical results verify that SA-RALS outperforms RALS in terms of both the number of iterations and the CPU time.
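To make the setting concrete, below is a minimal NumPy sketch of a RALS sweep for rank-R CP decomposition of a third-order tensor, followed by a crude step-size search along the update direction. The regularization weight `lam`, the candidate grid `alphas`, and the grid-search rule are illustrative assumptions only; this is not the paper's optimal step-size selection rule or the SA-RALS algorithm itself.

```python
# Illustrative sketch: regularized ALS (RALS) for rank-R CP decomposition
# with a crude "self-adaptive" step-size chosen by grid search.
# lam, alphas, and the step-size rule are assumptions for illustration.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of U (J x R) and V (K x R) -> (J*K x R)."""
    R = U.shape[1]
    return np.einsum('jr,kr->jkr', U, V).reshape(-1, R)

def unfold(T, mode):
    """Mode-n unfolding of a third-order tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def reconstruct(A, B, C):
    """Rebuild the tensor from CP factors A (I x R), B (J x R), C (K x R)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def rals_step(T, A, B, C, lam):
    """One RALS sweep: each factor solves the proximal least-squares subproblem
    min_X ||T_(n) - X * KR^T||_F^2 + lam * ||X - X_old||_F^2."""
    R = A.shape[1]
    factors = [A, B, C]
    for n in range(3):
        others = [factors[i] for i in range(3) if i != n]           # fixed factors
        KR = khatri_rao(others[0], others[1])                        # Khatri-Rao product
        V = (others[0].T @ others[0]) * (others[1].T @ others[1])    # Gram via Hadamard product
        rhs = unfold(T, n) @ KR + lam * factors[n]                   # proximal right-hand side
        factors[n] = np.linalg.solve(V + lam * np.eye(R), rhs.T).T
    return factors

def sa_rals_sketch(T, R, lam=0.1, iters=50, alphas=(1.0, 1.25, 1.5, 2.0), seed=0):
    """Hypothetical accelerated loop: take a RALS sweep, then pick the step size
    (>= 1) along the update direction that minimizes the reconstruction error."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, R)) for s in T.shape)
    for _ in range(iters):
        A1, B1, C1 = rals_step(T, A, B, C, lam)
        dA, dB, dC = A1 - A, B1 - B, C1 - C
        # crude stand-in for an optimal step-size rule: grid search over candidates
        best = min(alphas, key=lambda a: np.linalg.norm(
            T - reconstruct(A + a * dA, B + a * dB, C + a * dC)))
        A, B, C = A + best * dA, B + best * dB, C + best * dC
    return A, B, C
```

A step size of 1.0 in this sketch reduces to a plain RALS sweep; allowing larger candidates mimics the acceleration idea that extrapolating beyond the RALS update can reduce the number of sweeps needed.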

