Abstract

We consider minimizing a class of nonconvex composite stochastic optimization problems, as well as deterministic optimization problems, whose objective function consists of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. In this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex optimization problems: the stochastic alternating direction method of multipliers (ADMM) and stochastic proximal gradient descent. In particular, we study several inexact versions of these two types of splitting methods. At each iteration, the proposed schemes solve their subproblems inexactly, using relative error criteria rather than exogenous, diminishing error rules; this allows our approaches to handle some complex regularized problems arising in statistics and machine learning. Under mild conditions, we establish convergence of the schemes and bound their computational complexity, measured by the number of evaluations of the component gradients of the smooth function, and we show that some conclusions for their exact counterparts can be recovered.
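As a rough illustration of the problem class and of a basic stochastic proximal gradient step (not the paper's exact schemes or assumptions), one may write the composite objective and update as below, where f is the smooth (possibly nonconvex) expectation part, r is the weakly convex nonsmooth part, g_k is a stochastic estimate of the gradient of f at x_k, and alpha_k is a step size small enough for the proximal subproblem to be well posed:

```latex
% A minimal sketch, not the paper's exact scheme:
% composite problem and one stochastic proximal gradient step.
\[
  \min_{x \in \mathbb{R}^n} \; F(x) := f(x) + r(x),
  \qquad
  f(x) = \mathbb{E}_{\xi}\big[ f(x;\xi) \big],
\]
\[
  x_{k+1} = \operatorname{prox}_{\alpha_k r}\!\big( x_k - \alpha_k g_k \big),
  \qquad
  \operatorname{prox}_{\alpha r}(y) := \operatorname*{arg\,min}_{x} \;
  r(x) + \frac{1}{2\alpha}\,\|x - y\|^2 .
\]
```

In the inexact variants studied in the paper, this proximal (or ADMM) subproblem is solved only approximately, with the accuracy controlled by a relative error criterion tied to the current iterate rather than by a prescribed diminishing tolerance.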
