Abstract

In this paper, we propose a finite-time discrete distributed learning algorithm, denoted the FTDDL algorithm, based on the Stochastic Configuration Network (SCN) and the Zero Gradient Sum (ZGS) strategy. In distributed scenarios, the communication networks over which the data are stored are modeled as algebraic graphs, and the learning problems are formulated as distributed optimization problems. The FTDDL algorithm preserves the universal approximation property of the SCN learner. The algorithm proceeds in two phases, namely "distributed hidden parameters configuration" and "distributed output weights determination". It is theoretically proven that the FTDDL algorithm works in a fully distributed manner and achieves the same efficiency as the centralized SCN. Under the FTDDL algorithm, each node in the communication network exchanges only the updated output weights of the SCN with its direct neighbors; thus, the algorithm preserves data privacy and saves network bandwidth. The FTDDL algorithm is further proven to converge to the globally optimal output weights of the SCN within finite time. Compared with other distributed learning algorithms based on SCNs, the FTDDL algorithm significantly reduces the number of iterations. Finally, simulations are presented to verify the effectiveness of the FTDDL algorithm.
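
To illustrate the communication pattern described above, the following is a minimal, hypothetical sketch of the "distributed output weights determination" phase: a fixed, already-configured SCN hidden layer shared by all nodes, a ring communication graph, and a discrete ZGS-style update in which each node exchanges only its output weights with its direct neighbors. All names, data, and parameter choices are illustrative assumptions; this asymptotic consensus sketch does not reproduce the finite-time FTDDL updates from the paper.

```python
# Illustrative sketch only (assumed setup, not the paper's FTDDL algorithm):
# nodes hold disjoint data slices and exchange only SCN output weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task; each node holds a disjoint slice of the data.
X = np.sort(rng.uniform(-1, 1, 400)).reshape(-1, 1)
Y = np.sin(3 * X) + 0.5 * X
n_nodes, n_hidden, ridge, eps = 4, 25, 1.0, 0.3
X_parts, Y_parts = np.array_split(X, n_nodes), np.array_split(Y, n_nodes)

# Ring communication graph: every node talks only to its two direct neighbors.
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

# Fixed hidden parameters shared by all nodes (stands in for the outcome of
# the "distributed hidden parameters configuration" phase).
W_in = rng.uniform(-1, 1, (1, n_hidden))
b = rng.uniform(-1, 1, n_hidden)
hidden = lambda x: np.tanh(x @ W_in + b)

# Local quadratic costs: node i has Hessian A[i] and linear term g[i].
H = [hidden(Xp) for Xp in X_parts]
A = [Hp.T @ Hp / len(Hp) + ridge * np.eye(n_hidden) for Hp in H]
g = [Hp.T @ Yp / len(Hp) for Hp, Yp in zip(H, Y_parts)]

# ZGS-style initialization: each node starts at its own local minimizer,
# so the sum of local gradients is zero and remains zero under the updates.
beta = [np.linalg.solve(A[i], g[i]) for i in range(n_nodes)]

# Discrete ZGS-style iterations: only output weights cross the network.
for _ in range(300):
    beta = [beta[i] + eps * np.linalg.solve(
                A[i], sum(beta[j] - beta[i] for j in neighbours[i]))
            for i in range(n_nodes)]

# All nodes should now approximately agree on the centralized solution.
beta_star = np.linalg.solve(sum(A), sum(g))
print(max(np.linalg.norm(b_i - beta_star) for b_i in beta))
```

The design choice worth noting is that raw data and hidden-layer activations never leave a node: each iteration transmits only the current output-weight vectors between direct neighbors, which is the privacy- and bandwidth-saving property the abstract attributes to the FTDDL algorithm.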
