Abstract

In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for the third-order methods, we suggest an efficient technique for solving the auxiliary problem, which is based on the recently developed relative smoothness condition (Bauschke et al. in Math Oper Res 42:330–348, 2017; Lu et al. in SIOPT 28(1):333–354, 2018). With this elaboration, the third-order methods become implementable and very fast. The rate of convergence in terms of the function value for the accelerated third-order scheme reaches the level $O\left(\frac{1}{k^4}\right)$, where $k$ is the number of iterations. This is very close to the lower bound of the order $O\left(\frac{1}{k^5}\right)$, which is also justified in this paper. At the same time, in many important cases the computational cost of one iteration of this method remains on the level typical for second-order methods.
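As context for the abstract, the regularized local model minimized at each iteration can be sketched as follows. This is a reconstruction based on the standard tensor-method setup, not a formula taken from this page: the symbols $D^i f(x)[h]^i$ (the $i$-th directional derivative of $f$ at $x$ along $h$), the regularization parameter $H$, and the order $p$ are assumed notation.

```latex
x_{k+1} \in \operatorname*{Arg\,min}_{y}\;
  \Omega_{x_k,p,H}(y)
  \;=\; f(x_k)
  \;+\; \sum_{i=1}^{p} \frac{1}{i!}\, D^i f(x_k)[y - x_k]^i
  \;+\; \frac{H}{(p+1)!}\, \|y - x_k\|^{p+1}.
```

For convex $f$ whose $p$-th derivative is Lipschitz continuous with constant $L_p$, choosing the regularization parameter $H \ge p L_p$ makes this model convex, which is what renders the auxiliary problem tractable by standard methods of Convex Optimization.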

Highlights

  • Motivation: In the last decade, there has been increasing interest in the complexity analysis of high-order methods

  • In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial

  • We take an important step towards the practical implementation of tensor methods in unconstrained convex optimization


Summary

Introduction

Motivation. In the last decade, there has been increasing interest in the complexity analysis of high-order methods. High-order Taylor approximations have been employed, for example, in optimality conditions (see [22]). It seems that the majority of attempts to use high-order tensors in optimization methods failed because of the standard obstacle related to the enormous complexity of minimizing nonconvex multivariate polynomials. In our framework, the auxiliary optimization problem in the high-order (or tensor) methods becomes convex and is therefore solvable by many powerful methods of Convex Optimization. This fact explains our interest in the complexity analysis of the simplest tensor scheme. The most important results concern an efficient scheme for minimizing the regularized Taylor approximation of degree three; this auxiliary convex problem can be treated in the framework of the relative smoothness condition. Throughout, the analysis relies on the Taylor-remainder bounds $\|\nabla f(y) - \nabla \Phi_{x,p}(y)\| \le \frac{L_p}{p!}\|y-x\|^p$ and $\|\nabla^2 f(y) - \nabla^2 \Phi_{x,p}(y)\| \le \frac{L_p}{(p-1)!}\|y-x\|^{p-1}$, which are valid for all $x, y \in \mathrm{dom}\, f$.
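To make the basic idea concrete, here is a minimal one-dimensional sketch of a single tensor step for $p = 3$: minimize the third-order Taylor model plus a quartic regularizer. This is an illustration under stated assumptions, not the paper's implementation; the test function $f(x) = x^4$, the constant `L3`, and the helper `tensor_step` are all hypothetical choices made for this example.

```python
from scipy.optimize import minimize_scalar

# Illustrative objective f(x) = x^4: all derivatives are explicit,
# and f'''(x) = 24x is Lipschitz continuous with constant L3 = 24.
f   = lambda x: x**4        # objective
df  = lambda x: 4 * x**3    # first derivative
d2f = lambda x: 12 * x**2   # second derivative
d3f = lambda x: 24 * x      # third derivative

L3 = 24.0
H  = 3 * L3  # H >= p * L_p with p = 3 keeps the model convex

def tensor_step(x):
    """One third-order tensor step: minimize the regularized Taylor model
    Omega(y) = T_3(y; x) + (H / 4!) * (y - x)**4, a convex quartic."""
    def model(y):
        h = y - x
        return (f(x) + df(x) * h + d2f(x) * h**2 / 2
                + d3f(x) * h**3 / 6 + H * h**4 / 24)
    return minimize_scalar(model).x

x = 1.0
for _ in range(10):
    x = tensor_step(x)

print(x, f(x))  # iterates approach the minimizer at 0
```

Since $H = 3 L_3$, the model upper-bounds $f$, so each step is guaranteed to decrease the objective; the iterates contract geometrically toward the minimizer.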

Convex tensor approximations
Accelerated tensor methods
Lower complexity bounds for tensor methods
Third-order methods: implementation details
Discussion