In this article, we investigate federated learning (FL) in the communication-constrained environment of the Internet of Things (IoT), where multiple IoT clients collectively train a global model by exchanging model updates with a central server instead of sending raw data sets. To ease the communication burden in IoT systems, several approaches have been proposed for FL tasks, including sparsification methods and data quantization strategies. To overcome the shortcomings of these existing methods, we propose two new FL algorithms based on compressed sensing (CS), referred to as the CS-FL algorithm and the 1-bit CS-FL algorithm, both of which compress both the upstream and the downstream data communicated between the clients and the central server. The proposed algorithms improve upon existing ones by letting the clients send analog and 1-bit data, respectively, to the server after compression with a random measurement matrix. The clients in CS-FL and 1-bit CS-FL then update the model locally using the sparse reconstructions obtained by iterative hard thresholding (IHT) and binary IHT (BIHT), respectively. Experiments conducted on the MNIST and Fashion-MNIST data sets demonstrate the superiority of the proposed algorithms over the baseline algorithms: SignSGD with a majority vote, FL based on sparse ternary compression, and FedAvg.