Abstract

The presence of sharp minima in nondifferentiable optimization models has been exploited, over the last decades, to the benefit of various subgradient and proximal methods. One of the long-standing general proximal schemes of choice for minimizing nonsmooth functions is the Proximal Point Algorithm (PPA). For the basic PPA, several well-known works proved finite convergence towards weak sharp minima, under the assumption that each iteration is computed exactly. In this letter, however, we show finite convergence of a common Inexact version of PPA (IPPA) under sufficiently small but persistent perturbations of the proximal operator. Moreover, when a simple Subgradient Method is called recurrently as an inner routine for computing each IPPA iterate, a suboptimal minimizer of the original problem lying at distance ϵ from the optimal set is obtained after a total of O(log(1/ϵ)) subgradient evaluations. Our preliminary numerical tests show improvements over existing restarted versions of the Subgradient Method.
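
To make the outer/inner structure concrete, below is a minimal sketch of an IPPA-type scheme in which each proximal subproblem is solved approximately by a fixed budget of subgradient steps. The step sizes, iteration budgets, regularization parameter lam, and the l1-norm test function are illustrative assumptions, not the exact parameters or the exact scheme analyzed in the letter.

import numpy as np

def ippa(subgrad, x0, lam=1.0, outer_iters=20, inner_iters=50):
    """Inexact Proximal Point Algorithm (sketch): each proximal subproblem
        min_y  f(y) + (1/(2*lam)) * ||y - x_k||^2
    is only approximately solved, here by a fixed number of subgradient steps."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(outer_iters):
        y = x.copy()
        for t in range(1, inner_iters + 1):
            # Subgradient of the regularized subproblem at y:
            # an element of the subdifferential of f at y plus the gradient
            # of the quadratic proximal term.
            g = subgrad(y) + (y - x) / lam
            y = y - (1.0 / t) * g  # diminishing step size (illustrative choice)
        x = y  # inexact proximal step
    return x

# Toy example: f(x) = ||x||_1 has a sharp minimum at the origin.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)  # a valid subgradient of the l1 norm
x_star = ippa(subgrad, x0=[2.0, -3.0])
print(f(x_star))  # numerically close to 0
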
