Abstract

Federated learning (FL) has gained increasing popularity in medical image analysis. It provides a privacy-preserving scheme for collaborative model training by keeping data localized. In FL frameworks, instead of collecting data from clients, the server learns a global model by aggregating the clients' locally trained models and broadcasts the updated model back to them. However, when data are not independently and identically distributed (non-i.i.d.), model aggregation requires frequent message passing, which can create a communication bottleneck. In this paper, we propose a communication-efficient FL framework based on adaptive client-server model transmission. A client's local model is uploaded to the server only when two conditions are met: (1) a probability threshold and (2) an informative model-update threshold. Our framework also tackles data heterogeneity in federated networks by adding a proximal term to the local objective. We evaluate our approach on a simulated multi-site medical image dataset for diabetic retinopathy (DR) rating. We demonstrate that our framework not only maintains accuracy on non-i.i.d. data but also significantly reduces communication cost compared to other FL algorithms.
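The client-side logic described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the parameters `p` (probability threshold), `delta` (informative-update threshold), and `mu` (proximal weight), and the use of an L2 norm to measure update size are all assumptions for the purpose of the example. The proximal term follows the common FedProx-style form, (mu/2)·||w − w_global||², which the abstract's "proximal term" suggests but does not specify.

```python
import math
import random

def should_upload(local_model, global_model, p=0.5, delta=0.1, rng=None):
    """Hypothetical sketch of the two-condition upload rule:
    (1) pass a probability threshold p, and
    (2) the local update must be 'informative', i.e. its L2 distance
        from the current global model must exceed delta.
    Parameter names and the norm choice are illustrative assumptions."""
    rng = rng if rng is not None else random.Random()
    update_norm = math.sqrt(
        sum((w - g) ** 2 for w, g in zip(local_model, global_model))
    )
    return rng.random() < p and update_norm > delta

def proximal_local_loss(task_loss, local_model, global_model, mu=0.01):
    """FedProx-style local objective: the task loss plus a proximal term
    (mu/2) * ||w - w_global||^2 that limits client drift under
    non-i.i.d. data. mu is an assumed hyperparameter."""
    drift = sum((w - g) ** 2 for w, g in zip(local_model, global_model))
    return task_loss + (mu / 2.0) * drift
```

In this sketch, a client that has barely changed since the last broadcast (small `update_norm`) skips its upload entirely, which is where the communication savings come from.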
