Abstract

Conventional federated learning (FL) approaches generally rely on a centralized server, and there has been a trend of designing decentralized FL approaches for distributed applications, partly to mitigate limitations associated with conventional (centralized) FL approaches (e.g., a single point of failure/attack). In this paper, we first introduce two new tools: a quality-based aggregation method and an extended dynamic contribution broadcast encryption (DConBE). Building on these two tools and local differential privacy, we then propose a privacy-preserving and reliable decentralized FL scheme designed to support batch joining and leaving of clients with minimal delay while achieving high model accuracy. In other words, our scheme seeks to strike an optimal trade-off between model accuracy and data privacy, as demonstrated by our simulation results. For example, the results show that our aggregation method effectively filters out low-quality updates, so the scheme maintains high model accuracy even in the presence of bad clients who submit such updates. In addition, our scheme incurs a lower loss, and the extended DConBE only slightly affects the scheme's efficiency while allowing it to efficiently support batch joining and leaving of clients.
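
To make the high-level description concrete, the sketch below illustrates one plausible shape of quality-based aggregation combined with local differential privacy. It is not the scheme from the paper: the helper names (ldp_perturb, quality_score, quality_aggregate), the Laplace noise mechanism, and the cosine-similarity-to-median quality score are all illustrative assumptions; the paper's actual constructions, including the extended DConBE, appear in the full text.

    import numpy as np

    rng = np.random.default_rng(0)

    def ldp_perturb(update, clip=1.0, epsilon=1.0):
        """Client side (assumed mechanism): clip the update's L2 norm, then
        add Laplace noise calibrated to the clipping bound and epsilon."""
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip / (norm + 1e-12))
        noise = rng.laplace(scale=clip / epsilon, size=update.shape)
        return clipped + noise

    def quality_score(update, reference):
        """Illustrative quality measure: cosine similarity between a client's
        perturbed update and a reference direction (here, the median update)."""
        denom = np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12
        return float(update @ reference) / denom

    def quality_aggregate(updates, threshold=0.0):
        """Aggregator side: score each perturbed update, drop those scoring
        below the threshold, and average the rest weighted by score."""
        reference = np.median(np.stack(updates), axis=0)
        scored = [(quality_score(u, reference), u) for u in updates]
        kept = [(s, u) for s, u in scored if s > threshold]
        if not kept:  # fall back to plain averaging if everything is dropped
            return np.mean(updates, axis=0)
        total = sum(s for s, _ in kept)
        return sum(s * u for s, u in kept) / total

    # Example: nine honest clients submit similar updates; one bad client
    # submits a low-quality (opposite-direction) update that gets dropped.
    true_direction = np.ones(8)
    honest = [ldp_perturb(true_direction + 0.1 * rng.normal(size=8)) for _ in range(9)]
    bad = [ldp_perturb(-5.0 * true_direction)]
    print(quality_aggregate(honest + bad).round(2))

Under these assumptions, the bad client's update has a strongly negative quality score and is excluded from aggregation, which is the behavior the abstract attributes to the quality-based aggregation method: high model accuracy is preserved even when some clients submit low-quality updates.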
