Abstract

Federated learning is a machine learning paradigm that breaks down data silos, and its inherent privacy-preserving property makes it well suited to training medical imaging models. However, federated learning requires frequent communication, which incurs high communication costs. Moreover, client data are heterogeneous because of differing user preferences, which can degrade model performance. To address this statistical heterogeneity, we propose FedUC, an algorithm that controls the updates uploaded in federated learning: clients are scheduled on the basis of weight divergence, update increment, and loss. We also balance the clients' local data through image augmentation to mitigate the impact of non-independent and identically distributed (non-IID) data. The server assigns each client a compression threshold, based on the model's weight divergence and update increment, for gradient compression that reduces wireless communication costs. Finally, based on weight divergence, update increment, and accuracy, the server dynamically weights the model parameters during aggregation. Simulations on a publicly available chest disease dataset containing COVID-19 cases compare FedUC with existing federated learning methods. Experimental results show that the proposed strategy achieves better training performance, improving model accuracy while reducing wireless communication costs.
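
The abstract gives no formulas or pseudocode, but the threshold-based gradient compression and score-weighted aggregation it describes can be sketched roughly in Python. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, the L2 form of weight divergence is one common choice, and the per-client scores stand in for FedUC's actual mix of weight divergence, update increment, and accuracy, whose exact formula is not given here.

    import numpy as np

    def compress_update(update, threshold):
        """Sparsify a client update: zero entries whose magnitude falls
        below the server-assigned threshold (a hypothetical realization
        of the gradient compression described in the abstract)."""
        return np.where(np.abs(update) >= threshold, update, 0.0)

    def weight_divergence(client_w, global_w):
        """L2 distance between a client's model and the global model
        (one common choice; the paper's exact metric is not given here)."""
        return np.linalg.norm(client_w - global_w)

    def aggregate(global_w, updates, scores):
        """Score-weighted aggregation of (compressed) client updates."""
        scores = np.asarray(scores, dtype=float)
        coeffs = scores / scores.sum()
        return global_w + sum(c * u for c, u in zip(coeffs, updates))

    # Toy round: three clients, a five-parameter model.
    rng = np.random.default_rng(0)
    global_w = rng.normal(size=5)
    updates = [rng.normal(scale=0.1, size=5) for _ in range(3)]

    compressed = [compress_update(u, threshold=0.05) for u in updates]

    # Placeholder scores: inversely proportional to weight divergence,
    # standing in for the divergence/increment/accuracy mix in the paper.
    divs = [weight_divergence(global_w + u, global_w) for u in compressed]
    scores = [1.0 / (d + 1e-8) for d in divs]

    new_global = aggregate(global_w, compressed, scores)
    print(new_global)

In this toy round, each client update is sparsified against its server-assigned threshold before upload, and the server combines the surviving updates with normalized scores rather than a uniform average, which is the mechanism the abstract attributes to FedUC.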
