Abstract

We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions or is contained in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
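As a numerical illustration of the firm nonexpansiveness claimed above (this sketch is not from the paper; it assumes the standard definition prox_{λf}(x) = argmin_y { f(y) + ||y − x||²/(2λ) } and uses the convex function f = λ|·|, whose prox is the well-known soft-thresholding map), one can verify the firm-nonexpansiveness inequality ||P(x) − P(y)||² ≤ ⟨P(x) − P(y), x − y⟩ on random pairs:

```python
import math
import random

def prox_abs(x, lam):
    """Closed-form proximity operator of f = lam*|.| (soft-thresholding)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

# Check firm nonexpansiveness: (P(x)-P(y))^2 <= (P(x)-P(y))*(x-y)
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    px, py = prox_abs(x, 1.0), prox_abs(y, 1.0)
    assert (px - py) ** 2 <= (px - py) * (x - y) + 1e-12
```

Firm nonexpansiveness implies, in particular, that the operator is 1-Lipschitz, which is what drives the convergence of prox-based iterations.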

Highlights

  • The first motivation behind this study comes from works like [12,19,22,23], where proximal point type methods for minimizing quasiconvex functions, formulated by means of Bregman distances, were proposed

  • Looking for a way to reconcile these approaches, we came across a new class of generalized convex functions, which we called prox-convex, whose properties allowed us to extend the convergence of the classical proximal point algorithm beyond the convexity setting in a yet unexplored direction

  • At least due to the similar name, a legitimate question is whether the notion of prox-convexity is connected in any way with prox-regularity. While the latter asks a function to be locally lower semicontinuous around a given point, the notion we introduce in this work does not assume any topological properties of the involved function. Another difference with respect to this notion can be noticed in Sect. 4, where we show that the classical proximal point algorithm remains convergent towards a minimum of the function to be minimized even if it lacks convexity but is prox-convex

Summary

Introduction

The first motivation behind this study comes from works like [12,19,22,23], where proximal point type methods for minimizing quasiconvex functions, formulated by means of Bregman distances, were proposed. To the best of our knowledge, besides the convex and prox-convex functions, only the weakly convex ones have single-valued proximity operators (cf. [16]). These two facts (the existence of an optimal solution to (1.1) and the characterization (1.2), where ∂h is the usual convex subdifferential) are crucial tools for proving the convergence of proximal point type algorithms for continuous optimization problems consisting in minimizing (sums of) proper, lower semicontinuous and convex functions, and even for DC programming problems (see [4] for instance). After some preliminaries, where we define the framework and recall some necessary notions and results, we introduce and investigate the new classes of prox-convex functions and strongly G-subdifferentiable functions, showing that the proper and lower semicontinuous elements of the latter also belong to the former. We then show that the classical proximal point algorithm can be extended to the prox-convex setting without losing convergence.

Preliminaries
Prox-convex functions
Strongly G-subdifferentiable functions
Proximal point type algorithms for nonconvex problems
Conclusions and future work
