Abstract

This paper designs a fast convergent distributed cooperative learning (DCL) algorithm for feedforward neural networks with random weights (FNNRWs) over undirected, connected networks. First, a continuous-time fast convergent DCL algorithm is proposed, whose finite-time convergence is guaranteed via the Lyapunov method. Second, this algorithm is extended to a discrete-time form using the fourth-order Runge–Kutta method. Compared with the distributed alternating direction method of multipliers (ADMM) and Zero-Gradient-Sum-based (ZGS-based) algorithms, the proposed algorithm achieves higher learning accuracy and faster convergence. Simulation results demonstrate the fast convergence rate, which can be adjusted by properly selecting the tuning parameters.
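To make the discretization step concrete, the following is a minimal sketch of a classical fourth-order Runge–Kutta step applied to a simple consensus flow over an undirected, connected network. The Laplacian dynamics dx/dt = -Lx, the step size h, and the three-node path graph are illustrative assumptions standing in for the paper's actual DCL dynamics, not its exact formulation.

```python
import numpy as np

def rk4_step(f, x, h):
    """One fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical stand-in for the continuous-time DCL dynamics:
# each node's state is pulled toward its neighbors' states through
# the graph Laplacian L of a path graph on 3 nodes.
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
f = lambda x: -L @ x          # consensus flow dx/dt = -L x

x = np.array([1.0, 5.0, -2.0])  # initial local estimates at the nodes
h = 0.1                         # step size (a tuning parameter)
for _ in range(200):
    x = rk4_step(f, x, h)
print(x)  # entries converge to the average of the initial states
```

In this toy setting, the step size h plays the role of a tuning parameter: smaller values track the continuous-time flow more closely at the cost of more iterations, which mirrors how the paper's convergence rate is adjusted through parameter selection.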
