Abstract

This paper proposes a distributed cooperative learning (DCL) algorithm for radial basis function networks (RBFNs) whose training samples are extremely large-scale, high-dimensional, and located on distributed nodes of an undirected, connected network. The algorithm is investigated for RBFNs with different basis functions, such as the Gaussian, multiquadric, and thin plate spline functions. The proposed DCL–RBFN algorithm is based on the zero-gradient-sum (ZGS) distributed optimization strategy and works in a fully distributed fashion: it requires neither a central node nor the communication of raw data between nodes during training. Moreover, to avoid wasting network resources, an event-triggered communication mechanism is incorporated, so that each node exchanges learned information with its neighboring nodes under the ZGS strategy only when a triggering condition is met. The convergence of the proposed DCL–RBFN and event-triggered DCL–RBFN algorithms is established by the Lyapunov method; the algorithms are proved to converge exponentially while requiring less communication data over the network. Finally, a series of numerical simulations verifies the advantages of the proposed algorithms.
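To make the ZGS idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of ZGS-style distributed training of RBFN output weights with Gaussian basis functions. Each node holds local data and minimizes a local regularized least-squares cost; it starts from its own local minimizer (so the gradients sum to zero) and then runs a Hessian-preconditioned consensus iteration with its neighbors. The function names, the Euler discretization, the step-size rule, and the regularization parameter `lam` are all illustrative assumptions, and the event-triggered variant is omitted for brevity.

```python
import numpy as np

def gaussian_rbf(X, centers, sigma):
    # Gaussian basis: phi_k(x) = exp(-||x - c_k||^2 / (2*sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def zgs_dcl_rbfn(local_X, local_y, centers, sigma, adjacency,
                 lam=0.1, iters=5000):
    """Illustrative ZGS-style consensus sketch (assumed details).

    Node i holds (X_i, y_i) and the local cost
        f_i(w) = 0.5 * ||Phi_i w - y_i||^2 + 0.5 * lam * ||w||^2.
    ZGS initialization at each local minimizer keeps the sum of
    local gradients at zero throughout the (symmetric) consensus
    updates, so the common limit is the global minimizer of sum_i f_i.
    """
    n = len(local_X)
    Phis = [gaussian_rbf(X, centers, sigma) for X in local_X]
    Hs = [P.T @ P + lam * np.eye(P.shape[1]) for P in Phis]
    Hinv = [np.linalg.inv(H) for H in Hs]
    # ZGS initialization: each node starts at its local minimizer,
    # where its own gradient vanishes.
    W = [Hinv[i] @ (Phis[i].T @ local_y[i]) for i in range(n)]
    # Conservative step size for the Euler-discretized ZGS flow.
    deg = adjacency.sum(axis=1)
    step = 0.5 / max(deg[i] * np.linalg.norm(Hinv[i], 2)
                     for i in range(n))
    for _ in range(iters):
        W = [W[i] + step * Hinv[i] @ sum(adjacency[i][j] * (W[j] - W[i])
                                         for j in range(n))
             for i in range(n)]
    return W
```

Because the consensus weights are symmetric and each update is preconditioned by the exact inverse Hessian, the zero-gradient-sum invariant is preserved at every discrete step, which is what pins the consensus point to the global optimum.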
