Abstract

Federated learning (FL), which collaboratively trains a shared global model without exchanging or centralizing local data, offers a promising solution for privacy preservation. However, it faces two main challenges: first, high communication cost, and second, low model quality due to imbalanced or non-independent and identically distributed (non-IID) data. In this article, we propose FedVAE, an FL framework based on the variational autoencoder (VAE) for remote patient monitoring. FedVAE contains two lightweight VAEs: one projects data onto a lower-dimensional space with a similar distribution, alleviating the excessive communication overhead and the slow convergence caused by non-IID data; the other avoids training bias due to imbalanced data distribution by generating minority-class samples. Overall, the proposed FedVAE can improve the performance of FL models while consuming only a small amount of communication bandwidth. The experimental results show that the area under the curve (AUC) of FedVAE can reach 0.9937, even higher than that of the traditional centralized model (0.9931). In addition, fine-tuning the global model with personalization raises the average AUC to 0.9947. Moreover, compared with vanilla FL, FedVAE improves AUC by 0.87% while reducing communication traffic by at least 95%.
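The two roles the abstract assigns to the VAEs can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions (32-dimensional input, 4-dimensional latent code), the linear encoder/decoder, and the randomly initialized weights are all assumptions chosen for brevity; a trained VAE would use learned nonlinear networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32-dim patient features -> 4-dim latent code.
D_IN, D_LATENT = 32, 4

# Randomly initialized linear maps stand in for a trained lightweight VAE.
W_enc_mu = rng.normal(0, 0.1, (D_IN, D_LATENT))
W_enc_logvar = rng.normal(0, 0.1, (D_IN, D_LATENT))
W_dec = rng.normal(0, 0.1, (D_LATENT, D_IN))

def encode(x):
    """Project a sample onto the lower-dimensional latent space."""
    return x @ W_enc_mu, x @ W_enc_logvar

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: the standard VAE reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to the input space."""
    return z @ W_dec

# Role 1 (communication saving): clients work with compact latent codes
# instead of raw features, shrinking what must cross the network.
x = rng.normal(size=(1, D_IN))
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
assert z.shape == (1, D_LATENT)  # 32 -> 4: an 8x reduction per sample

# Role 2 (imbalance mitigation): once a VAE is trained on the minority
# class, sampling z ~ N(0, I) and decoding yields synthetic samples.
synthetic = decode(rng.normal(size=(1, D_LATENT)))
assert synthetic.shape == (1, D_IN)
```

The sketch shows only the inference-time mechanics; training the VAEs (reconstruction loss plus KL divergence) and the FL aggregation loop are omitted.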
