Abstract

Federated learning (FL) is widely used because it effectively enhances data privacy. However, the FL training process suffers from problems such as poor model performance and slow convergence, because the data is typically heterogeneous and participant devices have different computing capabilities. Here, we propose an optimized FL paradigm that applies model arithmetic prediction to prevent training inefficiency caused by participants with limited computational resources. The proposed participant-selection formula is based on posterior probabilities and correlation coefficients, which has been validated to reduce data noise and enhance the effect of central model aggregation. Selecting high-quality participant models by posterior probability, combined with correlation coefficients, allows the server to aggregate as many well-performing participant models as possible while avoiding the impact of participants with excessive data noise. During the aggregation step, model loss values and participant training delays are used as weighting factors for participant devices, which accelerates FL convergence and improves model performance. Data heterogeneity and non-IID distributions are fully taken into account in the proposed method. Finally, extensive experiments verify these results and demonstrate better performance on non-IID data, especially for affective computing. Compared with previous research, our method reduces training latency by 4 seconds and increases model accuracy by 10% on average.
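The abstract does not give the exact weighting formula, so the following is only a minimal sketch of a loss- and delay-weighted aggregation step under stated assumptions: the inverse-loss/inverse-delay combination, the `aggregate` function name, and the NumPy parameter dictionaries are illustrative choices, not the paper's implementation.

```python
import numpy as np

def aggregate(client_params, client_losses, client_delays, eps=1e-8):
    """Sketch of server-side aggregation weighted by loss and training delay.

    client_params : list of dicts mapping layer name -> np.ndarray
    client_losses : per-client training loss values
    client_delays : per-client training delays (seconds)

    Assumption: clients with lower loss and lower delay receive larger
    weights; the paper's exact formula is not stated in the abstract.
    """
    losses = np.asarray(client_losses, dtype=float)
    delays = np.asarray(client_delays, dtype=float)

    # Combine the two signals: smaller loss and smaller delay -> larger weight.
    raw = (1.0 / (losses + eps)) * (1.0 / (delays + eps))
    weights = raw / raw.sum()

    # Weighted average of each parameter tensor across the selected clients.
    global_params = {}
    for name in client_params[0]:
        stacked = np.stack([p[name] for p in client_params], axis=0)
        global_params[name] = np.tensordot(weights, stacked, axes=1)
    return global_params
```

In this sketch, a client whose model trains quickly and reaches a low loss contributes more to the global model, which is one plausible reading of how the weighting could both speed up convergence and limit the influence of noisy participants.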
