Abstract

This paper develops distributed learning algorithms for feedforward neural networks with random weights (FNNRWs) using an event-triggered communication scheme. Under this scheme, the communication of each agent is driven by a trigger condition, so agents exchange information asynchronously and only when it is truly required. To this end, the centralized FNNRW problem is cast as a set of distributed subproblems with consensus constraints imposed on the desired parameters and solved by the discrete-time zero-gradient-sum (ZGS) strategy. An event-triggered communication scheme is then introduced into the ZGS-based FNNRW algorithm to avoid unnecessary transmission costs, which is particularly useful when communication resources are limited. It is proved that the proposed event-triggered approach converges exponentially, provided the design parameter is chosen properly, under strongly connected and weight-balanced agent interactions. Two numerical simulation examples verify the effectiveness of the proposed algorithm.
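The setting described in the abstract can be illustrated with a toy sketch. With the random hidden weights of an FNNRW held fixed, training the output weights reduces to a least-squares problem, so each agent's local objective is quadratic with a constant Hessian; this is exactly the case where a discrete-time ZGS iteration applies. The code below is a minimal sketch, not the paper's algorithm: the ring graph, the decaying trigger threshold, and all parameter values (`alpha`, `eps0`, `rho`, the data sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: with random hidden weights fixed, each agent's FNNRW training
# problem is the quadratic f_i(w) = ||H_i w - y_i||^2 + lam * ||w||^2,
# where H_i is agent i's hidden-layer output matrix (synthetic data here).
n_agents, n_hidden, lam = 4, 5, 1e-1
H = [rng.standard_normal((20, n_hidden)) for _ in range(n_agents)]
y = [rng.standard_normal(20) for _ in range(n_agents)]
Hess = [2.0 * (Hi.T @ Hi + lam * np.eye(n_hidden)) for Hi in H]  # constant local Hessians

# ZGS initialization: each agent starts at its *local* minimizer, so the local
# gradients sum to zero, and the update below preserves that invariant.
w = [np.linalg.solve(Hess[i], 2.0 * H[i].T @ y[i]) for i in range(n_agents)]

# Undirected ring graph: strongly connected and weight-balanced.
nbrs = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

w_hat = [wi.copy() for wi in w]      # last broadcast state of each agent
alpha, eps0, rho = 0.5, 1e-1, 0.95   # step size; decaying trigger threshold eps0 * rho**k
for k in range(800):
    # Event-triggered rule: an agent broadcasts only when its state has
    # drifted from the last transmitted value by more than the threshold.
    thresh = eps0 * rho**k
    for i in range(n_agents):
        if np.linalg.norm(w[i] - w_hat[i]) > thresh:
            w_hat[i] = w[i].copy()
    # Discrete-time ZGS step driven by the latest *broadcast* neighbor states.
    w = [w[i] + alpha * np.linalg.solve(Hess[i],
                                        sum(w_hat[j] - w_hat[i] for j in nbrs[i]))
         for i in range(n_agents)]

# Centralized reference: the minimizer of sum_i f_i on the pooled data.
H_all, y_all = np.vstack(H), np.concatenate(y)
w_star = np.linalg.solve(H_all.T @ H_all + n_agents * lam * np.eye(n_hidden),
                         H_all.T @ y_all)
```

Because the graph is undirected (hence weight-balanced), each ZGS step leaves the sum of local gradients at zero, so the only consensus point the agents can reach is the centralized minimizer `w_star`; the decaying threshold lets the trigger-induced error vanish over time.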
