Abstract

In this paper, we study continuous approximations to the Clarke subdifferential and the Demyanov–Rubinov quasidifferential. Different methods for the construction of the continuous approximations are proposed and discussed. Numerical methods for the minimization of locally Lipschitzian functions based on the continuous approximations are described and their convergence is studied. To test the proposed methods, numerical experiments have been carried out and are discussed in the paper.

Introduction

This paper presents some methods for the continuous approximation of the Clarke subdifferential and the Demyanov–Rubinov quasidifferential of a locally Lipschitzian function. On the basis of the continuous approximations, we propose numerical methods of nonsmooth optimization. The paper presents a survey of some results obtained in [2–10].

The notions of the Clarke subdifferential and the Demyanov–Rubinov quasidifferential play a key role in nonsmooth and nonconvex optimization. The Clarke subdifferential admits a calculus only in the form of inclusions, and this calculus cannot in general be used for the estimation of subgradients. Unlike the Clarke subdifferential, the quasidifferential has a full-scale calculus, which can be used for its calculation, although its complete calculation requires operations with polytopes in n-dimensional space. The number of these polytopes and of their vertices can be very large, and therefore the calculation of the quasidifferential becomes very complicated.

In smooth optimization, there exist minimization methods which, instead of using the gradient, use its approximations through finite differences (forward, backward, and central differences). In [37], a very simple convex nondifferentiable function was presented for which these finite differences may give no information about the subdifferential. It follows that such finite-difference estimates of the gradient cannot be used to approximate subgradients of nonsmooth functions.

In the past decades, a few methods for the numerical computation of subgradients were proposed and studied. In [55], Shor proposed special finite differences which allow one to approximate one subgradient of a convex function. In [56, 57], Studniarski modified Shor's algorithm and extended it to the class of subregular functions. In [11], Borwein proposed a method for the calculation of one subgradient of a convex function. Finally, in [46], Nesterov introduced the notion of lexicographically smooth functions and proposed a method for the calculation of their subgradients.

In this paper, we introduce the notion of a discrete gradient as a certain version of finite-difference subgradient estimates. The discrete gradient is defined with respect to a given direction and thus allows one to approximate the directional derivative of a given function. This approximation allows us to propose an effective algorithm for the calculation of a descent direction of a function at a given point. In general, the set of discrete gradients approximates the entire subdifferential, which is mainly of theoretical interest. For the computation of a descent direction, we need to calculate only a few discrete gradients at a given point, and the proposed algorithm for such a computation is definitive.
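
To make the idea more concrete, the sketch below is a minimal illustration (not the discrete-gradient construction used in the paper): the directional derivative of a locally Lipschitz function is estimated by a difference quotient along a given direction, crude subgradient-like estimates are assembled from coordinate-wise forward differences at points shifted in a few directions, and a trial descent direction is taken as the negative of the shortest vector in the convex hull of these estimates. The helper names (subgradient_estimate, min_norm_in_hull, descent_direction), the coordinate-wise forward differences, and the use of SciPy's SLSQP solver for the min-norm quadratic program are all illustrative assumptions, not the authors' method.

    import numpy as np
    from scipy.optimize import minimize

    def directional_difference(f, x, g, lam=1e-6):
        # Difference-quotient estimate of the directional derivative f'(x; g).
        return (f(x + lam * g) - f(x)) / lam

    def subgradient_estimate(f, x, g, lam=1e-6):
        # Crude subgradient-like estimate: coordinate-wise forward differences
        # taken at an auxiliary point shifted in the direction g.
        # (Illustrative only; the discrete gradient of the paper is built
        # from a more careful sequence of auxiliary points.)
        n = x.size
        y = x + lam * g
        v = np.empty(n)
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0
            v[i] = (f(y + lam * e) - f(y)) / lam
        return v

    def min_norm_in_hull(V):
        # Shortest vector in the convex hull of the rows of V, found by a
        # small quadratic program over the unit simplex.
        m = V.shape[0]
        obj = lambda w: 0.5 * np.dot(w @ V, w @ V)
        cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
        res = minimize(obj, np.full(m, 1.0 / m), method="SLSQP",
                       bounds=[(0.0, 1.0)] * m, constraints=cons)
        return res.x @ V

    def descent_direction(f, x, directions, lam=1e-6):
        # Collect a few estimates and return the negative of the shortest
        # element of their convex hull as a trial descent direction.
        V = np.array([subgradient_estimate(f, x, g, lam) for g in directions])
        return -min_norm_in_hull(V)

    # Example: a simple nonsmooth convex function at a kink point.
    f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
    x0 = np.array([0.0, 1.0])
    dirs = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
    d = descent_direction(f, x0, dirs)
    print("trial descent direction:", d)
    print("difference quotient along it:", directional_difference(f, x0, d))

For f(x) = |x1| + 2|x2| at the kink point (0, 1), the three estimates span the segment between (-1, 2) and (1, 2), the shortest element of the hull is (0, 2), and the resulting direction (0, -2) indeed decreases f, as the negative difference quotient in the last line confirms.
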
The lack of continuity of the subdifferential and quasidifferential mappings creates difficulties in the study of methods for the minimization of locally Lipschitz functions. In [59], it was noted that the lack of this property was responsible for the failure of nonsmooth steepest descent algorithms. On
