Abstract
The world has suffered greatly from the COVID-19 pandemic. Although vaccines have been developed, we must still be prepared for its variants and for possible future pandemics. To provide people with pandemic risk assessments without violating their privacy, a Federated Learning (FL) framework is envisioned. However, most existing FL frameworks work only with homogeneous models, i.e., models with the same configuration, ignoring users' preferences and the varying capabilities of their devices. To this end, we propose Fed2KD, a novel FL framework based on two-way knowledge distillation. Knowledge is exchanged between the global and local models by distilling information into and out of a tiny model with a unified configuration. This distillation, however, cannot be conducted without a common dataset. To remove this bottleneck, we leverage a Conditional Variational Autoencoder (CVAE) to generate data that serves as a proxy dataset for distillation. The proposed framework is first evaluated on benchmark datasets (MNIST and FashionMNIST) against existing methods such as Federated Averaging (FedAvg). When data is non-independent and identically distributed (non-IID), Fed2KD improves performance over FedAvg by up to 30% on MNIST and 18% on FashionMNIST. Fed2KD is then evaluated on pandemic risk assessment tasks through DP4coRUna, a mobile app we developed that provides indoor risk prediction.
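The core ingredient of the knowledge exchange described above is a distillation loss that matches a student model's softened predictions to a teacher's on the shared proxy dataset. A minimal sketch of such a loss is shown below, using temperature-softened softmax and KL divergence; the function names and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-softened softmax; higher T produces softer distributions.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on softened outputs: zero when the student
    # matches the teacher, positive otherwise. In a two-way scheme, the
    # roles of teacher and student are swapped depending on whether
    # knowledge flows into or out of the tiny unified model.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Example: a local model's logits distilled into the tiny model on one
# proxy sample (logit values are made up for illustration).
teacher = [2.0, 0.5, -1.0]
student = [1.0, 1.0, 0.0]
loss = distillation_loss(teacher, student)
```

In the framework described by the abstract, the proxy samples fed to both models would come from the CVAE generator rather than from any client's private data, which is what allows distillation without a common real dataset.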