Abstract

This article develops distributed optimization strategies for a class of machine learning problems over a directed network of computing agents. In these problems, the global objective function is the sum of local objective functions, each of which is convex and known only to its corresponding agent. A second-order Nesterov accelerated dynamical system with a time-varying damping coefficient is developed to address such problems. To handle the constraints in these problems, a projected primal-dual method is incorporated into the Nesterov accelerated system. Using the theory of cocoercive maximal monotone operators, it is shown that the trajectories of the Nesterov accelerated dynamical system reach consensus at the optimal solution, provided that the damping coefficient and gains satisfy certain technical conditions. Finally, the theoretical results are validated on an email classification problem and a logistic regression problem from machine learning.
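
To make the flavor of such dynamics concrete, here is a minimal sketch of a discretized second-order system with time-varying damping running over a directed ring of agents, each holding a local logistic-regression loss. This is an illustration only, assuming the classical accelerated form x'' + (α/t) x' + β(I − W)x + ∇f(x) = 0 rather than the paper's exact projected primal-dual system; the graph, gains, and step size below are all assumptions for demonstration.

```python
# Illustrative sketch (not the paper's exact method): each agent i holds a
# local convex loss f_i and runs a forward-Euler discretization of
#   x_i'' + (alpha/t) x_i' + beta * sum_j w_ij (x_i - x_j) + grad f_i(x_i) = 0,
# where alpha/t is the time-varying damping and W is a row-stochastic mixing
# matrix over a directed ring. All names, gains, and step sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_samples = 4, 3, 50

# Synthetic local logistic-regression data for each agent.
w_true = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    A = rng.normal(size=(n_samples, dim))
    y = (A @ w_true + 0.1 * rng.normal(size=n_samples) > 0).astype(float)
    data.append((A, y))

def local_grad(i, x):
    """Gradient of agent i's logistic loss (small l2 term keeps it convex)."""
    A, y = data[i]
    p = 1.0 / (1.0 + np.exp(-A @ x))
    return A.T @ (p - y) / n_samples + 1e-3 * x

# Directed ring: agent i receives information only from agent (i+1) mod n.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.5

x = rng.normal(size=(n_agents, dim))    # agent positions
v = np.zeros_like(x)                    # agent velocities
h, alpha, beta = 0.05, 3.0, 2.0         # step size, damping, consensus gain

for k in range(1, 5001):
    t = k * h
    grads = np.stack([local_grad(i, x[i]) for i in range(n_agents)])
    disagreement = x - W @ x            # (I - W) x: gap with in-neighbors
    a = -(alpha / t) * v - beta * disagreement - grads
    v += h * a                          # forward-Euler integration
    x += h * v

print("max disagreement across agents:", np.abs(x - x.mean(axis=0)).max())
```

In continuous time, the vanishing damping α/t is what distinguishes Nesterov accelerated dynamics from ordinary heavy-ball flows; the consensus term β(I − W)x couples the agents so their trajectories can agree on a common minimizer of the summed objective.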
