Abstract

Most current algorithms for solving distributed online optimization problems are first-order methods, which are computationally simple but converge slowly. Newton's method converges quickly but requires computing the Hessian matrix and its inverse, which is computationally expensive. This paper proposes a distributed online optimization algorithm based on a Newton-type step, which constructs a positive definite matrix from the first-order information of the objective function to replace the inverse of the Hessian matrix in Newton's method. The convergence of the algorithm is proved theoretically and a regret bound is obtained. Finally, numerical experiments verify the feasibility and efficiency of the proposed algorithm. The experimental results show that the proposed algorithm performs efficiently on practical problems compared with several existing gradient descent algorithms.
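To make the idea concrete, the following is a minimal sketch of one plausible instantiation of such an update, assuming an Online-Newton-Step-style curvature surrogate built from gradient outer products and a doubly stochastic consensus weight matrix W over the network; the function names, parameters, and the specific ordering of consensus and update steps are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def distributed_newton_style_sketch(grads, W, dim, T=100, alpha=1.0, eps=1e-3):
    """Illustrative distributed online Newton-type update (not the paper's exact method).

    grads : callable grads(t, i, x) -> gradient of node i's loss at round t, point x
    W     : doubly stochastic consensus weight matrix of shape (n, n)  [assumed]
    dim   : dimension of the decision variable
    """
    n = W.shape[0]
    X = np.zeros((n, dim))                                    # local iterates, one per node
    A = np.array([eps * np.eye(dim) for _ in range(n)])       # positive definite surrogates
    for t in range(T):
        G = np.array([grads(t, i, X[i]) for i in range(n)])   # first-order information only
        X = W @ X                                              # consensus: average neighbours
        for i in range(n):
            # Build the positive definite matrix from gradient outer products,
            # replacing the inverse Hessian used in the classical Newton step.
            A[i] += np.outer(G[i], G[i])
            # Newton-like step using the surrogate curvature matrix.
            X[i] -= alpha * np.linalg.solve(A[i], G[i])
    return X
```

Because the surrogate matrix uses only gradients, each round avoids forming or inverting a true Hessian, which is the computational saving the abstract refers to.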
