Abstract

Federated learning (FL) is a pivotal catalyst for enabling large-scale privacy-preserving distributed machine learning (ML). By eliminating the need to share local raw datasets, FL substantially reduces privacy concerns and alleviates the data isolation problem. In practice, however, the success of FL is predominantly attributed to a centralized framework called FedAvg [1], in which workers are responsible for model training and servers are in charge of model aggregation. FedAvg's centralized worker-server architecture has raised new concerns, including limited scalability of the cluster, risk of data leakage, and central server failure or even defection. To overcome these challenges, we propose Decentralized Federated Trusted Averaging (DeFTA), a decentralized FL framework that serves as a plug-and-play replacement for FedAvg, bringing instant improvements to security, scalability, and fault tolerance in the federated learning process. In essence, it primarily consists of a novel model aggregation formula with theoretical performance analysis, and a decentralized trust system (DTS) that significantly enhances system robustness. Extensive experiments conducted on six datasets and six basic models suggest that DeFTA not only exhibits performance comparable to FedAvg in a more realistic setting, but also achieves remarkable resilience even when 67% of workers are malicious.
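
To illustrate the general idea of serverless aggregation that DeFTA builds on, the following is a minimal sketch of decentralized, dataset-size-weighted model averaging in which each worker aggregates the models received from its peers locally, with no central server. This is an illustration of the underlying concept only, not DeFTA's exact aggregation formula or trust mechanism; all names (aggregate, own_model, peer_models) are hypothetical.

```python
import numpy as np

def aggregate(own_model, own_size, peer_models, peer_sizes):
    """Weighted average of a worker's own model and its peers' models.

    own_model / peer_models: flattened parameter vectors (1-D numpy arrays).
    own_size / peer_sizes: local dataset sizes, used as aggregation weights.
    """
    models = [own_model] + list(peer_models)
    sizes = np.array([own_size] + list(peer_sizes), dtype=float)
    weights = sizes / sizes.sum()          # normalize weights to sum to 1
    return sum(w * m for w, m in zip(weights, models))

# Example: three workers, each holding a 4-parameter model.
rng = np.random.default_rng(0)
models = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 250, 50]

# Worker 0 aggregates its own model with those received from its two peers.
new_model = aggregate(models[0], sizes[0], models[1:], sizes[1:])
print(new_model)
```

In a decentralized setting each worker runs such an aggregation step independently over whatever peer models it has received, which removes the single point of failure that a central aggregation server represents.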
