Abstract

Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics, and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, to be called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of $o(1/\sqrt{k})$, where $k$ is the number of iterations. Moreover, under an additional error bound condition (to be called HVP-EB in the paper) with exponent $\theta \in (0,1]$, we show that NAPP-AL converges globally. If, in addition, $\theta \in (0,\tfrac{1}{2}]$, then the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Łojasiewicz property and Hölderian metric subregularity imply the aforementioned HVP-EB condition. Under mild conditions, NAPP-AL can also be interpreted as a variant of the forward-backward operator splitting method in this context.

Funding: This work was supported by the National Natural Science Foundation of China [Grant 71871140].
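
To make the setting concrete, the sketch below shows a generic single-loop, first-order augmented-Lagrangian primal-dual iteration of the kind the abstract describes: a forward (gradient) step on the augmented Lagrangian in the primal variable, followed by a dual ascent step on the multiplier. This is only a minimal illustration, not the paper's NAPP-AL algorithm; the toy equality-constrained least-squares problem, the penalty parameter `rho`, the step-size rule, and the stopping test are all assumptions chosen to keep the example short and runnable.

```python
# Illustrative first-order augmented-Lagrangian primal-dual loop (a sketch,
# NOT the paper's NAPP-AL). Toy problem (an assumption for this example):
#   minimize   f(x) = 0.5 * ||A x - b||^2
#   subject to c(x) = sum(x) - 1 = 0
# Augmented Lagrangian: L_rho(x, y) = f(x) + y * c(x) + (rho / 2) * c(x)^2
import numpy as np

def grad_f(A, b, x):
    """Gradient of the smooth objective f(x) = 0.5 * ||Ax - b||^2."""
    return A.T @ (A @ x - b)

def al_primal_dual_sketch(A, b, rho=10.0, iters=5000, tol=1e-8):
    m, n = A.shape
    x = np.zeros(n)
    y = 0.0                     # multiplier of the single equality constraint
    ones = np.ones(n)           # gradient of c(x) = sum(x) - 1
    # Conservative primal step: 1 / Lipschitz constant of grad_x L_rho(., y)
    L = np.linalg.norm(A, 2) ** 2 + rho * n
    step = 1.0 / L
    for _ in range(iters):
        c = x.sum() - 1.0                      # constraint residual c(x)
        # Forward (gradient) step on the augmented Lagrangian in x; a
        # nonsmooth term would add a backward (prox) step here.
        g = grad_f(A, b, x) + (y + rho * c) * ones
        x = x - step * g
        # Dual ascent step on the multiplier (step rho, a common choice).
        y = y + rho * (x.sum() - 1.0)
        if np.linalg.norm(g) < tol and abs(x.sum() - 1.0) < tol:
            break
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    x, y = al_primal_dual_sketch(A, b)
    print("constraint residual:", abs(x.sum() - 1.0))
```

Viewing the primal update as a forward step on the smooth part (with the prox of any nonsmooth term supplying the backward step) is what makes the forward-backward operator-splitting interpretation mentioned in the abstract plausible in this simplified setting.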
