Abstract

Federated learning (FL) is widely adopted because it enhances data privacy. However, the FL training process faces problems such as poor model performance and slow convergence, because participant data is typically heterogeneous and participant devices differ in computing capability. Here, we propose an optimized FL paradigm that applies model arithmetic (computing power) prediction to prevent training inefficiency caused by participants with limited computational resources. The proposed participant-selection formula is based on posterior probabilities combined with correlation coefficients, and is validated to reduce data noise and enhance central model aggregation: high-quality participant models are selected so that the server aggregates as many well-performing participant models as possible while avoiding the impact of participants with excessive data noise. During the aggregation step, model loss values and participant training delay are used as weighting factors for participant devices, which accelerates FL convergence and improves model performance. The proposed method fully accounts for data heterogeneity and non-IID distributions. Finally, extensive experiments verify these results, demonstrating better performance on non-IID data, especially in affective computing. Compared with previous research, the method reduces training latency by 4 seconds and increases model accuracy by 10% on average.
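
To make the aggregation rule described above concrete, the following is a minimal sketch in which each selected participant's model is weighted by a function of its training loss and training delay before averaging. The inverse-loss/inverse-delay weighting and the normalization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def aggregate(participant_models, losses, delays, eps=1e-8):
    """Weighted aggregation sketch: participants with lower loss and
    lower training delay receive larger aggregation weights.

    participant_models: list of models, each a list of per-layer
    numpy arrays. The inverse-loss/inverse-delay weight form below
    is an assumption; the paper's exact formula may differ.
    """
    losses = np.asarray(losses, dtype=float)
    delays = np.asarray(delays, dtype=float)

    # Lower loss and lower delay -> higher aggregation weight.
    raw = 1.0 / (losses + eps) / (delays + eps)
    weights = raw / raw.sum()

    # Weighted average of each layer across participants.
    aggregated = [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*participant_models)
    ]
    return aggregated, weights
```

In this sketch a straggler (large delay) or a noisy participant (large loss) contributes less to the global model, which matches the stated goals of faster convergence and reduced noise, even though the concrete weighting function used in the paper is not given in the abstract.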
