The fundamental communication paradigm in next-generation mobile networks is shifting from connected things to connected intelligence. As a result, current communication-centric wireless systems are increasingly strained when supporting computation-centric intelligent services over distributed big data. This tension has motivated federated learning, which enables collaborative training across many edge devices while avoiding the transmission of raw data. To tackle the model-aggregation problem in federated learning systems, this article employs multiple reconfigurable intelligent surfaces (RISs) to provide efficient and reliable learning-oriented wireless connectivity. Communication and computation are seamlessly integrated through over-the-air computation (AirComp), which can be regarded as an uplink nonorthogonal multiple access (NOMA) technique that dispenses with individual information decoding. Because all local parameters are uploaded via noisy concurrent transmissions, propagation errors inevitably degrade the accuracy of the aggregated global model. The goals of this work are to 1) alleviate the signal distortion of AirComp over shared wireless channels and 2) speed up the convergence rate of federated learning. More specifically, both the mean-square error (MSE) and the device set in the model-uploading process are optimized by jointly designing transceivers, tuning reflection coefficients, and selecting clients. Extensive simulation results show that, compared with baselines, 1) the proposed algorithms aggregate models more accurately and accelerate convergence and 2) the training loss and inference accuracy of federated learning are improved significantly with the aid of multiple RISs.
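To make the optimization target concrete, the following is a minimal sketch of the standard RIS-assisted AirComp aggregation model typically used in this line of work; the notation (receive combiner \(\mathbf{m}\), transmit scalars \(b_k\), RIS phase-shift matrices \(\boldsymbol{\Theta}_l\), selected device set \(\mathcal{S}\)) is assumed for illustration and need not match the article's own formulation.

```latex
% Illustrative AirComp signal model (assumed notation, not the article's own):
% devices in the selected set S upload unit-variance symbols s_k with transmit
% scalars b_k; the base station applies receive combiner m to the superposed signal.
\hat{g} = \frac{1}{|\mathcal{S}|}\,\mathbf{m}^{\mathsf{H}}
          \Big( \sum_{k \in \mathcal{S}} \mathbf{h}_k b_k s_k + \mathbf{n} \Big)

% Effective channel of device k through L RISs, where the diagonal phase-shift
% matrices Theta_l are the "reflection coefficients" being tuned:
\mathbf{h}_k = \mathbf{h}_{d,k}
             + \sum_{l=1}^{L} \mathbf{G}_l \boldsymbol{\Theta}_l \mathbf{r}_{l,k},
\qquad
\boldsymbol{\Theta}_l = \operatorname{diag}\!\big( e^{j\theta_{l,1}}, \ldots, e^{j\theta_{l,N}} \big)

% Aggregation MSE against the target g = (1/|S|) * sum_k s_k, with noise power
% sigma^2; this is the quantity minimized jointly over m, {b_k}, {Theta_l}, and S:
\mathrm{MSE}
= \mathbb{E}\big[\, |\hat{g} - g|^{2} \,\big]
= \frac{1}{|\mathcal{S}|^{2}}
  \Big( \sum_{k \in \mathcal{S}}
        \big| \mathbf{m}^{\mathsf{H}} \mathbf{h}_k b_k - 1 \big|^{2}
        + \sigma^{2} \,\|\mathbf{m}\|^{2} \Big)
```

Under this model, lowering the MSE directly reduces the distortion of the aggregated global update, while enlarging the device set \(\mathcal{S}\) exploits more local data per round, which is the trade-off behind the joint transceiver, reflection, and client-selection design described above.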