Abstract

We consider minimizing a class of nonconvex composite stochastic optimization problems, as well as deterministic optimization problems, whose objective function consists of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. In this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex optimization problems: the stochastic alternating direction method of multipliers and stochastic proximal gradient descent. In particular, we study several inexact versions of these two types of splitting methods. At each iteration, the proposed schemes solve their subproblems inexactly by using relative error criteria instead of exogenous and diminishing error rules, which allows our approaches to handle some complex regularized problems in statistics and machine learning. Under mild conditions, we establish the convergence of the schemes and their computational complexity in terms of evaluations of the component gradients of the smooth function, and show that several conclusions of their exact counterparts can be recovered.
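To fix ideas, the following is a minimal illustrative sketch (not the paper's algorithm) of one of the two method types mentioned above: stochastic proximal gradient descent applied to a composite problem of the form min_x E[f(x; xi)] + r(x). The smooth part is taken here to be a mini-batch least-squares loss and the nonsmooth part r(x) = lam * ||x||_1, whose proximal operator is soft-thresholding; the names A, b, lam, eta, and batch are hypothetical placeholders, and the exact prox step stands in for the inexact, relative-error subproblem solves studied in the paper.

```python
# Sketch of stochastic proximal gradient descent for min_x E[f(x; xi)] + r(x),
# with f a least-squares loss on a sampled mini-batch and r = lam * ||.||_1.
# All names and parameter values are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad_step(x, A, b, lam, eta, batch, rng):
    """One iteration: sample component gradients, then take a proximal step."""
    idx = rng.choice(A.shape[0], size=batch, replace=False)
    grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient of the smooth part
    return soft_threshold(x - eta * grad, eta * lam)  # prox_{eta*r}(x - eta*grad)

# Usage sketch on synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)); b = rng.standard_normal(200)
x = np.zeros(50)
for _ in range(100):
    x = stochastic_prox_grad_step(x, A, b, lam=0.1, eta=0.01, batch=16, rng=rng)
```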
