Abstract

Federated learning (FL) has gained increasing popularity in medical image analysis. It provides a privacy-preserving scheme for collaborative model training by keeping data localized. In an FL framework, instead of collecting data from clients, the server learns a global model by aggregating locally trained models from clients and broadcasts the updated model back. However, when data are not independently and identically distributed (non-i.i.d.), model aggregation requires frequent message passing, which can create a communication bottleneck. In this paper, we propose a communication-efficient FL framework based on adaptive server-client model transmission. A client's local model is uploaded to the server only when it satisfies (1) a probability threshold and (2) an informative model-update threshold. Our framework also tackles data heterogeneity in federated networks by incorporating a proximal term. We evaluate our approach on a simulated multi-site medical image dataset for diabetic retinopathy (DR) rating. We demonstrate that our framework not only maintains accuracy on the non-i.i.d. dataset but also significantly reduces communication cost compared to other FL algorithms.
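The two mechanisms described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the proximal weight `mu`, the upload probability `p`, and the update threshold `tau` are all hypothetical placeholders. The proximal term follows the standard FedProx-style regularizer that pulls the local model toward the global one, and the upload gate combines a random coin flip with a check on the magnitude of the local update.

```python
import numpy as np

def local_update(w_global, grad_fn, mu=0.01, lr=0.1, steps=5):
    """One client's local training with a proximal term (hypothetical sketch).
    The term mu/2 * ||w - w_global||^2 penalizes drift from the global
    model, which mitigates the effect of non-i.i.d. client data.
    """
    w = w_global.copy()
    for _ in range(steps):
        # gradient of the local loss plus gradient of the proximal term
        w -= lr * (grad_fn(w) + mu * (w - w_global))
    return w

def should_upload(w_local, w_global, p=0.5, tau=1e-3, rng=None):
    """Adaptive transmission gate (thresholds p and tau are illustrative):
    upload only if (1) a Bernoulli(p) draw succeeds and (2) the update is
    informative, i.e. the norm of the model change exceeds tau.
    """
    rng = rng or np.random.default_rng()
    informative = np.linalg.norm(w_local - w_global) > tau
    return bool(rng.random() < p) and bool(informative)
```

Clients that skip an upload simply continue local training, so the server aggregates only the informative updates that pass both gates, reducing the number of transmitted models per round.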
