Abstract

As an efficient learning model, the broad learning system (BLS) has achieved great success in machine learning and various applications owing to its outstanding performance. Unlike typical deep learning models, the BLS has a more straightforward structure and fewer parameters. In this study, we focus on developing a communication-efficient decentralized version of the elastic-net BLS (D-ENBLS). The scenario we consider is one in which the training data are distributed across a network of interconnected agents that cannot share raw data owing to resource limitations or privacy concerns. In such a distributed paradigm, the communication between agents is an important issue to investigate. To save communication resources, we introduce quantization and communication-censoring strategies into the D-ENBLS algorithm to streamline the communication procedure and reduce the communication cost with little performance degradation. The resulting algorithm, referred to as DQC-ENBLS, uses quantization to reduce the number of bits per transmission and communication censoring to reduce the total number of transmissions. By formulating the training problem as a finite-sum minimization, the alternating direction method of multipliers (ADMM) is employed to solve the optimization problem in a decentralized manner. The experimental results verify that the DQC-ENBLS algorithm can reduce the communication cost while maintaining similar performance on the test datasets.
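To make the quantize-and-censor idea concrete, the sketch below illustrates in Python how an agent might quantize its local update and skip a transmission when the change since its last broadcast falls below a decaying threshold. The uniform quantizer, threshold schedule, and update form are illustrative assumptions, not the authors' exact DQC-ENBLS rules.

```python
import numpy as np

# Minimal sketch of a quantize-and-censor communication rule for one agent in a
# decentralized iterative scheme. The quantizer, censoring threshold schedule,
# and local update are assumptions for illustration only.

def quantize(x, num_bits=4, dynamic_range=1.0):
    """Uniform quantizer: map each entry to one of 2**num_bits levels."""
    levels = 2 ** num_bits - 1
    step = 2.0 * dynamic_range / levels
    clipped = np.clip(x, -dynamic_range, dynamic_range)
    return np.round((clipped + dynamic_range) / step) * step - dynamic_range

def censor_and_transmit(w_new, w_last_sent, iteration, tau0=1.0, rho=0.9):
    """Transmit only if the quantized change since the last broadcast is large
    enough (communication censoring); otherwise stay silent this round."""
    threshold = tau0 * (rho ** iteration)   # decaying censoring threshold (assumed schedule)
    q = quantize(w_new - w_last_sent)       # quantize the innovation, not the raw state
    if np.linalg.norm(q) >= threshold:
        return w_last_sent + q, True        # neighbors receive the quantized increment
    return w_last_sent, False               # transmission censored; neighbors reuse old value

# Toy usage: one agent's local state over a few iterations.
rng = np.random.default_rng(0)
w, w_sent = np.zeros(8), np.zeros(8)
for k in range(5):
    w = w + 0.1 * rng.standard_normal(8)    # stand-in for a local (e.g., ADMM primal) update
    w_sent, sent = censor_and_transmit(w, w_sent, k)
    print(f"iter {k}: transmitted={sent}")
```

In this toy loop, communication cost falls on two axes at once: each transmitted message carries a low-bit quantized increment, and iterations whose updates are too small are not transmitted at all, which mirrors the two savings mechanisms described in the abstract.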
