Abstract

Approximately sixty years ago, two seminal findings, the cutting-plane and the subgradient methods, radically changed the landscape of mathematical programming. They provided, for the first time, a practical way to optimize real functions of several variables characterized by kinks, that is, by discontinuities in their derivatives. Convex functions, for which a superb body of theoretical research was growing in parallel, naturally became the main field of application. The aim of the paper is to give a concise survey of the key ideas underlying the subsequent development of the area, which took the name of numerical nonsmooth optimization. The focus is, in particular, on the research mainstreams generated under the impulse of the two initial discoveries.

Highlights

  • Nonsmooth optimization (NSO), sometimes referred to as Nondifferentiable optimization (NDO), deals with problems where the objective function exhibits kinks

  • Some general ideas on how the conceptual Bundle Method (BM) works when the proximal approach is adopted; a minimal illustrative sketch follows this list

  • The approach is known as the Proximal Level Bundle Method (PLBM), and the setting of the level parameter θk is the key issue to address; see the note after the sketch below
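
The sketch below is a minimal illustrative rendering, in Python, of one way a proximal bundle iteration can be organized; it is not the authors' implementation, and the test function, the fixed proximal weight and the descent fraction 0.1 are arbitrary choices made here for illustration. The bundle accumulates linearizations (points, function values, subgradients); the cutting-plane model plus a quadratic proximal term around the stability center is minimized to obtain a tentative displacement, which triggers a serious step only if it yields a sufficient fraction of the decrease predicted by the model, and a null step otherwise.

import numpy as np
from scipy.optimize import minimize

# Example objective: a pointwise maximum of affine functions (convex, nonsmooth).
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.5, -1.5]])
b = np.array([0.0, 1.0, 0.5])

def f_and_subgrad(x):
    vals = A @ x + b
    j = int(np.argmax(vals))          # index of an active affine piece
    return vals[j], A[j]              # f(x) and one subgradient at x

def model_value(bundle, x):
    # Cutting-plane model: maximum of the accumulated linearizations at x
    return max(fy + g @ (x - y) for (y, fy, g) in bundle)

def bundle_step(bundle, center, prox_weight):
    # Minimize the cutting-plane model plus (prox_weight/2)*||d||^2 around `center`,
    # using the classical reformulation with an auxiliary variable v = z[n].
    n = center.size
    obj = lambda z: z[n] + 0.5 * prox_weight * z[:n] @ z[:n]
    cons = [{'type': 'ineq',
             'fun': (lambda z, y=y, fy=fy, g=g:
                     z[n] - (fy + g @ (center + z[:n] - y)))}
            for (y, fy, g) in bundle]
    res = minimize(obj, np.zeros(n + 1), constraints=cons, method='SLSQP')
    return res.x[:n]                  # tentative displacement from the center

x = np.array([3.0, -2.0])             # stability center
fx, gx = f_and_subgrad(x)
bundle = [(x.copy(), fx, gx)]

for k in range(30):
    d = bundle_step(bundle, x, prox_weight=1.0)
    y = x + d
    predicted = fx - model_value(bundle, y)   # decrease promised by the model
    if predicted < 1e-8:                      # model cannot promise further progress: stop
        break
    fy, gy = f_and_subgrad(y)
    bundle.append((y.copy(), fy, gy))         # enrich the bundle in any case
    if fx - fy >= 0.1 * predicted:            # sufficient actual decrease: serious step
        x, fx = y, fy                         # move the stability center
    # otherwise: null step, keep the center; the new cut refines the model

print("approximate minimizer:", x, " f =", fx)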

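As a point of reference for the level variant, the classical level bundle rule (given here only for orientation; it is not necessarily the exact recipe adopted in the surveyed PLBM) computes the next iterate by projecting the stability center xk onto a level set of the cutting-plane model, say mk:

    x_{k+1} = argmin { ||x − xk||^2 : mk(x) ≤ θk },

with θk placed between the model lower bound f_low (the minimum of mk) and the best recorded value f_rec, e.g. θk = f_low + λ (f_rec − f_low) for some λ in (0, 1). Choosing θk close to f_rec yields only a small predicted decrease, while choosing it close to f_low makes the projection step more aggressive; balancing the two is what makes the setting of θk the key issue.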

Summary

Introduction

Nonsmooth optimization (NSO), sometimes referred to as Nondifferentiable optimization (NDO), deals with problems where the objective function exhibits kinks. A real breakthrough took place around the mid-1970s, when the idea of an iterative process based on information accumulation materialized in the methods independently proposed by Lemaréchal (1974) and Wolfe (1975). From those seminal papers an incredibly large number of variants flourished, under the common label of bundle-type methods. In more recent years, motivated by the interest in solving problems where exact calculation of the objective function is either impossible or computationally costly, several methods based on its approximate computation were devised. At present, the derivative-free philosophy is successfully stepping into the nonsmooth optimization world. The paper is a slightly revised version of Gaudioso et al. (2020c).

Preliminaries
Nonsmooth optimization mainstreams
Methods based on single-point models
Methods based on multi-point models
Making BM implementable
Miscellaneous algorithms
Variable metric
Gradient sampling
Nonconvex NSO: a bundle view