Abstract

Smoothing methods have become part of the standard tool set for the study and solution of nondifferentiable and constrained optimization problems as well as a range of other variational and equilibrium problems. In this note we synthesize and extend recent results due to Beck and Teboulle on infimal convolution smoothing for convex functions with those of X. Chen on gradient consistency for nonconvex functions. We use epi-convergence techniques to define a notion of epi-smoothing that allows us to tap into the rich variational structure of the subdifferential calculus for nonsmooth, nonconvex, and nonfinite-valued functions. As an illustration of the versatility and range of epi-smoothing techniques, the results are applied to the general constrained optimization problem, of which nonlinear programming is a special case.
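For background (not taken from the paper itself), a standard instance of infimal convolution smoothing of a proper, lower semicontinuous, convex function f is the Moreau envelope; the display below is a minimal illustrative sketch assuming the usual Euclidean quadratic kernel and smoothing parameter \mu > 0.

% Moreau envelope as infimal convolution with a quadratic kernel (illustrative)
e_{\mu} f(x) \;=\; \inf_{y} \Big\{ f(y) + \tfrac{1}{2\mu}\,\|x - y\|^{2} \Big\},
\qquad
\nabla e_{\mu} f(x) \;=\; \tfrac{1}{\mu}\big(x - \operatorname{prox}_{\mu f}(x)\big).

Under these assumptions e_{\mu} f is continuously differentiable with a (1/\mu)-Lipschitz gradient and epi-converges to f as \mu \downarrow 0, which is the kind of smooth approximation with controlled variational behavior that the epi-smoothing framework is designed to capture.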
