Abstract

The presence of sharp minima in nondifferentiable optimization models has been exploited, over the last decades, to the benefit of various subgradient and proximal methods. One of the long-standing general proximal schemes of choice for minimizing nonsmooth functions is the Proximal Point Algorithm (PPA). For the basic PPA, several well-known works proved finite convergence towards weak sharp minima under the assumption that each iteration is computed exactly. In this letter, however, we show finite convergence of a common Inexact version of PPA (IPPA) under sufficiently small but persistent perturbations of the proximal operator. Moreover, when a simple Subgradient Method is called recurrently as an inner routine for computing each IPPA iterate, an approximate minimizer of the original problem lying within ϵ distance of the optimal set is obtained after a total of O(log(1/ϵ)) subgradient evaluations. Our preliminary numerical tests show improvements over existing restarted versions of the Subgradient Method.
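The scheme described above can be illustrated with a minimal sketch: an outer IPPA loop whose proximal subproblem is solved only approximately by a few inner subgradient steps. The function names (`ippa`, `subgrad`), the step-size rule, and the iteration counts below are illustrative assumptions, not the paper's actual parameter choices.

```python
import numpy as np

def ippa(subgrad, x0, lam=1.0, outer_iters=20, inner_iters=50):
    """Inexact Proximal Point Algorithm (sketch).

    Each outer step approximates the proximal operator
        x_{k+1} ~ argmin_y f(y) + ||y - x_k||^2 / (2*lam)
    by running a fixed number of subgradient iterations on the
    (strongly convex) subproblem -- this truncation is the
    "inexactness" in IPPA.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        y = x.copy()
        for t in range(1, inner_iters + 1):
            # Subgradient of the regularized subproblem:
            # g in ∂f(y) + (y - x)/lam
            g = subgrad(y) + (y - x) / lam
            y -= (1.0 / t) * g  # diminishing step (illustrative choice)
        x = y
    return x

# Example: f(x) = ||x||_1 has a (weak) sharp minimum at the origin,
# with subgradient sign(x).
subgrad = lambda x: np.sign(x)
x_star = ippa(subgrad, x0=np.array([2.0, -1.5]))
```

In this toy run the iterates are driven close to the sharp minimizer at the origin; the paper's point is that, under weak sharpness, such inexact proximal steps reach the optimal set in finitely many outer iterations.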
