Abstract

In recent years, the stochastic variance reduced gradient (SVRG) method has been applied to singular value decomposition (SVD), yielding the VR-SVD method, which uses only simple variance reduction iterations yet attains an exponential convergence rate to the optimal solution. Motivated by the broad application prospects of large-scale SVD, this paper proposes two efficient asynchronous parallel stochastic variance reduction algorithms. Moreover, we theoretically analyze their convergence properties and show that both the dense and sparse asynchronous parallel variants converge at a rate of O(1/T). Finally, we extend our algorithms to SVD problems with multiple singular vectors. Extensive experimental results show that our algorithms attain ideal parallel speedup ratios and achieve the same PSNR values in a shorter time, and thus they can be widely used in practice.
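The abstract does not spell out the VR-SVD update rule itself. As a rough illustration of the kind of "simple variance reduction iteration" it refers to, the following is a minimal Python sketch of a variance-reduced stochastic power iteration for the leading right singular vector, in the spirit of SVRG-style PCA/SVD solvers; the function name `vr_svd_top_vector`, the step size `eta`, and the epoch length are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def vr_svd_top_vector(A, eta=0.05, epochs=20, seed=0):
    """Variance-reduced stochastic power iteration for the leading
    right singular vector of A (rows a_i).

    Illustrative sketch only: `eta` and the one-pass epoch length
    are assumed hyperparameters, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        w_snap = w.copy()                  # snapshot iterate, kept fixed for the epoch
        full = A.T @ (A @ w_snap) / n      # full matrix-vector product at the snapshot
        for _ in range(n):                 # one pass of cheap stochastic updates
            i = rng.integers(n)
            a = A[i]
            # variance-reduced step: stochastic term corrected by the snapshot term
            g = a * (a @ w) - a * (a @ w_snap) + full
            w = w + eta * g
            w /= np.linalg.norm(w)         # project back to the unit sphere
    sigma = np.linalg.norm(A @ w)          # leading singular value estimate
    u = (A @ w) / sigma                    # corresponding left singular vector
    return u, sigma, w
```

Because each inner step touches only one row of A while the snapshot term keeps the stochastic gradient's variance in check, updates of this form are also natural candidates for the lock-free asynchronous execution the paper studies, where multiple workers apply such steps to a shared iterate concurrently.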
