Introduction. Methods of unconstrained optimization play a significant role in machine learning [1–6]. When solving practical problems in machine learning, such as tuning nonlinear regression models, the extremum point of the chosen optimality criterion is often degenerate, which greatly complicates the search for it. Degenerate problems are therefore among the most challenging in optimization. Known numerical methods of up to second order for the general unconstrained optimization problem converge very slowly on degenerate problems [7, 8]. This is because a substantial improvement in the convergence rate in this case requires the method to use derivatives of order higher than two [10]. The purpose of the paper is to develop an efficient quasi-Newton method for solving degenerate unconstrained optimization problems whose idea, unlike regularization, is to split the whole space into the direct sum of two orthogonal subspaces. This idea was introduced in [23]. The space division at each iteration of the method is based on the spectral decomposition of the matrix approximating the Hessian of the objective function by the BFGS formula [3]. On each subspace the objective function behaves differently, so an appropriate minimization method is applied on each.

Results. A combined quasi-Newton method is presented for solving degenerate unconstrained optimization problems, based on the orthogonal decomposition of the Hessian approximation matrix and the division of the whole space into the direct sum of two orthogonal subspaces. On one subspace (the kernel of the Hessian approximation matrix), a method that computes fourth-order directional derivatives is applied, while a quasi-Newton method is applied on its orthogonal complement. A separate one-dimensional search is performed on each of these subspaces to determine the step size along the corresponding direction. The effectiveness of the presented combined method is confirmed by numerical experiments on widely accepted test functions for unconstrained optimization. The proposed method yields fairly accurate solutions to the test problems when the minimum point is degenerate, at a significantly lower cost in gradient evaluations than the optimization routines of well-known mathematical packages.

Conclusions. The idea of splitting the whole space into the direct sum of two (or possibly more) orthogonal subspaces when solving complex optimization problems is quite promising, since it allows different numerical methods to be combined on the separate subspaces. In the future, it is planned to conduct theoretical research on the convergence rate of the presented combined method for degenerate unconstrained optimization problems.

Keywords: unconstrained optimization, quasi-Newton methods, degenerate minimum point, spectral matrix decomposition, machine learning.
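To make the iteration structure concrete, the sketch below illustrates one step of a combined method of the kind described above: the BFGS matrix is spectrally decomposed, the space is split into the near-kernel and its orthogonal complement, and a separate step and line search are taken on each subspace. This is a minimal sketch under stated assumptions, not the authors' implementation: the threshold `eps`, the backtracking search, and in particular the stand-in steepest-descent step on the kernel (the paper instead uses fourth-order directional derivatives, whose formulas are not given in the abstract) are all illustrative choices.

```python
import numpy as np

def combined_step(f, grad, x, B, eps=1e-6):
    """One illustrative iteration of the combined method.
    B is the current BFGS approximation of the Hessian at x.
    All parameter names and tolerances are assumptions."""
    lam, Q = np.linalg.eigh(B)           # spectral decomposition B = Q diag(lam) Q^T
    small = np.abs(lam) < eps            # eigenvalues treated as (numerically) zero
    Z, Y = Q[:, small], Q[:, ~small]     # kernel basis / orthogonal-complement basis
    g = grad(x)

    # Quasi-Newton direction on the orthogonal complement of the kernel,
    # where B is well conditioned and can be inverted on the subspace.
    d_qn = -Y @ ((Y.T @ g) / lam[~small])

    # On the near-kernel the quadratic model carries no information; the
    # paper applies a fourth-order directional-derivative step here. As a
    # placeholder, project the steepest-descent direction onto the kernel.
    d_ker = -Z @ (Z.T @ g)

    # Separate one-dimensional searches on each subspace, as in the abstract
    # (simple backtracking used purely for illustration).
    a = backtracking(f, x, d_qn)
    b = backtracking(f, x, d_ker)
    return x + a * d_qn + b * d_ker

def backtracking(f, x, d, t=1.0, beta=0.5, max_iter=30):
    """Halve the step until the objective decreases (illustrative only)."""
    fx = f(x)
    for _ in range(max_iter):
        if f(x + t * d) < fx:
            return t
        t *= beta
    return 0.0
```

The key design point the sketch conveys is that the two subspaces receive independent step multipliers `a` and `b`, so slow progress along the degenerate directions does not throttle the quasi-Newton step on the well-conditioned complement.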