Abstract

We study a generalization of the nonderivative discrete gradient method of Bagirov et al. for minimizing a locally Lipschitz function f on ℝⁿ. We strengthen the existing convergence result for this method by showing that it either drives the f-values to −∞ or each of its cluster points is Clarke stationary for f, without requiring the compactness of the level sets of f. Our generalization is an approximate bundle method, which also subsumes the secant method of Bagirov et al.

Highlights

  • We consider the recently proposed discrete gradient (DG) method

  • In contrast with bundle methods (see, e.g., [11, 12] and the references in [3, 5, 7, 14]), which require the computation of a single subgradient of f at each trial point, the DG method approximates subgradients by discrete gradients using f-values only

  • This is important for applications where subgradients are unavailable and derivative-free methods are employed; see, e.g., [1, 2] and the references therein

Summary

Introduction

We consider the recently proposed discrete gradient (DG) method. In contrast with bundle methods (see, e.g., [11, 12] and the references in [3, 5, 7, 14]), which require the computation of a single subgradient of f at each trial point, the DG method approximates subgradients by discrete gradients using f-values only. We prove that this bundle method either drives the f-values to −∞, or each of its cluster points is Clarke [8] stationary for f (see Thm. 3.1). This is significantly stronger than the result of Bagirov et al.
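To illustrate the key property of the DG method, that search directions are built from f-values alone, the following is a minimal sketch of a derivative-free gradient estimate via forward differences. Note that the discrete gradients of Bagirov et al. are defined differently (along a chosen direction, with a specific averaging scheme); this simplified example only conveys the idea of approximating first-order information using function evaluations only. The function `fd_gradient` and the sample objective are illustrative, not part of the paper's method.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate of f at x, using f-values only.

    A simplified stand-in for the discrete gradients of Bagirov et al.,
    which share the key feature of requiring no subgradient oracle.
    """
    n = x.size
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h  # one extra f-evaluation per coordinate
    return g

# Example: a nonsmooth convex function, f(x) = |x_0| + x_1^2.
# At x = (2, 1), f is differentiable with gradient (1, 2).
f = lambda x: abs(x[0]) + x[1] ** 2
g = fd_gradient(f, np.array([2.0, 1.0]))
```

At points where f is nonsmooth (here, x_0 = 0), a single finite-difference vector is not enough; the DG method instead collects several such approximations into a bundle, mirroring how bundle methods aggregate subgradients.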

A bundle method with approximate subgradients
Convergence analysis
Extensions