Abstract

Subgradient methods (method A and method B) are considered for finding a minimum point of a convex function whose optimal value is known. Both methods use the well-known Polyak step, which depends on a scalar parameter m ≥ 1 and guarantees a monotone decrease of the distance to the minimum point. For m = 1, methods A and B are applicable to an arbitrary convex function. Taking m > 1 allows special classes of convex functions to be exploited: convex quadratic functions (m = 2), differentiable functions homogeneous of degree σ (m = σ), etc. For both methods, theorems are proved establishing the convergence rate O(1/√k) for an arbitrary convex function and convergence at the rate of a geometric progression for a convex function with an acute minimum. Method A is a subgradient method with the Polyak step in the original space of variables. The parameter m = 1 corresponds to the classical Polyak step for an arbitrary convex function. The parameter m = 2 can be used for minimizing quadratic functions, for which the Polyak step is doubled compared with the classical step (m = 1). For one-dimensional functions, method A finds the minimum point in one step from an arbitrary starting point, which corresponds to one iteration of Newton's method. The proofs of the convergence-rate theorems for method A rely on characteristics of the convex function in the original space of variables, among which a constant c1, which bounds the subgradient norm, plays a crucial role. Method B is a subgradient method with space transformation, where the Polyak step is computed in a space transformed by a linear operator; the method is defined by a nonsingular matrix B. The parameter m = 1 corresponds to the classical Polyak step for an arbitrary convex function in the transformed space of variables. If m = 2, then for a one-dimensional quadratic function method B finds the minimum point in one step for an arbitrary matrix B and an arbitrary starting point.
The proofs of the convergence-rate theorems for method B rely on function characteristics in the transformed space of variables, analogous to the characteristics used for method A in the original space.
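The abstract does not give the step formula explicitly, but the classical Polyak step with the parameter m described above takes the form x_{k+1} = x_k − m (f(x_k) − f*) / ‖g_k‖² · g_k, where g_k is a subgradient at x_k and f* is the known optimal value. A minimal sketch of method A under that assumption (the function names and tolerances are illustrative, not from the paper):

```python
import numpy as np

def polyak_subgradient(f, subgrad, f_star, x0, m=1.0, tol=1e-10, max_iter=1000):
    """Sketch of a subgradient method with Polyak's step ("method A").

    Iteration: x_{k+1} = x_k - m * (f(x_k) - f_star) / ||g_k||^2 * g_k,
    where g_k is a subgradient of f at x_k.  m = 1 suits an arbitrary
    convex function; m = 2 suits convex quadratics, as the abstract notes.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gap = f(x) - f_star          # nonnegative for a convex f with known f*
        if gap <= tol:
            break
        g = subgrad(x)
        x = x - m * gap / np.dot(g, g) * g
    return x

# One-dimensional quadratic: with m = 2 the doubled Polyak step lands on the
# minimizer in a single iteration, matching the abstract's claim for method A.
f = lambda x: float((x[0] - 3.0) ** 2)            # f* = 0, x* = 3
subgrad = lambda x: np.array([2.0 * (x[0] - 3.0)])
x_min = polyak_subgradient(f, subgrad, f_star=0.0, x0=[10.0], m=2.0)
```

For this quadratic, one step gives 10 − 2·49/14² · 14 = 3, the exact minimizer.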

Highlights

  • Subgradient methods are considered for finding the minimum point of a convex function when its minimal value is known

  • If m = 2, then for a one-dimensional quadratic function the Polyak method in the transformed space (method B) finds the minimum point in one step for an arbitrary matrix B and an arbitrary starting point

  • The proofs of the convergence-rate theorems for method B rely on function characteristics in the transformed space of variables, analogous to those used for method A in the original space
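The paper does not spell out method B in the abstract, but a natural reading is that the Polyak step is taken in the coordinates y defined by the assumed change of variables x = By: a subgradient of φ(y) = f(By) is Bᵀg, and mapping the y-space step back gives x_{k+1} = x_k − m (f(x_k) − f*) / ‖Bᵀg_k‖² · BBᵀg_k. A hedged sketch under that assumption (the update formula and names are my illustration, not the paper's definition):

```python
import numpy as np

def polyak_subgradient_transformed(f, subgrad, f_star, x0, B, m=1.0,
                                   tol=1e-10, max_iter=1000):
    """Sketch of a Polyak-step subgradient method with space transformation
    ("method B"), assuming the change of variables x = B y with nonsingular B.

    In y-coordinates the subgradient of phi(y) = f(B y) is B^T g, so the
    Polyak step in y maps back to
        x_{k+1} = x_k - m * (f(x_k) - f_star) / ||B^T g||^2 * B B^T g.
    """
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B, dtype=float)
    for _ in range(max_iter):
        gap = f(x) - f_star
        if gap <= tol:
            break
        g = subgrad(x)
        Btg = B.T @ g
        x = x - m * gap / np.dot(Btg, Btg) * (B @ Btg)
    return x

# One-dimensional quadratic with m = 2: the matrix B cancels, so the minimum
# is found in one step for an arbitrary nonsingular B (here B = [[5]]).
f = lambda x: float((x[0] + 1.0) ** 2)            # f* = 0, x* = -1
subgrad = lambda x: np.array([2.0 * (x[0] + 1.0)])
x_min = polyak_subgradient_transformed(f, subgrad, 0.0, [4.0], B=[[5.0]], m=2.0)
```

In the scalar case ‖Bᵀg‖² = b²g² while BBᵀg = b²g, so b cancels exactly, illustrating the highlight's claim that the one-step property holds for an arbitrary matrix B and starting point.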



Introduction

Subgradient methods (method A and method B) are considered for finding the minimum point of a convex function when its minimal value is known.

Glushkov NAS of Ukraine, Kyiv, PhD student, vik.stovba@gmail.com, ORCID: https://orcid.org/0000-0003-3023-5815
Glushkov NAS of Ukraine, Kyiv, PhD student, zhmud17@gmail.com, ORCID: https://orcid.org/0000-0002-4591-1110


