Abstract

Recently, stochastic variance reduced gradient (SVRG) and related methods have made significant advances in solving finite-sum non-convex problems, but their applicability is limited to convex constraints. In contrast, we study the same problem subject to non-convex constraints and analyze the stochastic variance reduced framework under mild assumptions. The presence of non-convex constraints complicates the problem; for instance, preserving the feasibility of the iterates becomes challenging. To this end, we propose to fuse the stochastic variance reduced framework with successive convex approximation (SCA) methods, thereby solving a convex optimization problem at each iteration while maintaining iterate feasibility. Moreover, we prove that the proposed algorithm generates a convergent sequence under mild assumptions. Experimental evaluations on the node localization problem demonstrate that the proposed method is indeed faster than state-of-the-art non-convex optimization methods.
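
To make the high-level idea concrete, the Python sketch below illustrates one plausible way to combine an SVRG-style variance-reduced gradient estimate with an SCA subproblem per iteration. It is a sketch under stated assumptions, not the paper's algorithm: the names svrg_sca, grad_i, and solve_surrogate, the damping factor, and the epoch lengths are all hypothetical. solve_surrogate is assumed to minimize a convex surrogate of the objective, built at the current iterate with the variance-reduced gradient, over a convex inner approximation of the non-convex feasible set, which is what keeps every iterate feasible.

    import numpy as np

    def svrg_sca(x0, grad_i, n, solve_surrogate, epochs=10, m=50, rng=None):
        """Hypothetical variance-reduced SCA loop (illustration only).

        grad_i(x, i): gradient of the i-th component function at x.
        solve_surrogate(x, g): minimizer of a convex surrogate built at x
            with gradient estimate g, over a convex inner approximation of
            the non-convex feasible set that contains x.
        """
        rng = rng or np.random.default_rng(0)
        x = x0.copy()
        for _ in range(epochs):
            x_snap = x.copy()
            # Full gradient at the snapshot (the SVRG anchor point).
            full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
            for _ in range(m):
                i = rng.integers(n)
                # Variance-reduced gradient estimate at the current iterate.
                v = grad_i(x, i) - grad_i(x_snap, i) + full_grad
                # SCA step: solve the convex subproblem; x_hat is feasible by
                # construction of the convex inner approximation.
                x_hat = solve_surrogate(x, v)
                # Damped update keeps the iterate inside the convex
                # restriction (the step size 0.5 is an arbitrary choice here).
                x = x + 0.5 * (x_hat - x)
        return x

Since the feasible set of each subproblem is convex and contains the current iterate, any convex combination of x and x_hat remains feasible, which is the mechanism the abstract refers to as maintaining iterate feasibility.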
