Abstract

Federated learning (FL) has been widely used in IoT applications, yet it is vulnerable to attacks in each of its phases. Existing defenses for FL mainly target the centralized setting, in which a parameter server collects and distributes model parameters. This setting requires a trusted third party, a requirement that cannot always be satisfied, and it suffers from the inherent vulnerability of a single point of failure (SPOF): the whole system ceases to function once the parameter server is compromised. Decentralized federated learning has therefore gained great attention recently. However, conventional defense strategies are mostly designed for the centralized parameter-server architecture and cannot cope with the new challenges that arise in highly decentralized FL. First, in a trustless setting, malicious participants can break down the whole system at the communication level by disrupting model exchanges. Second, current defensive methods cannot effectively identify and exclude malicious participants. In either case, a harmful bias degrades performance even when malicious participants launch no model-level attacks. Defense strategies for decentralized FL are therefore urgently needed. To this end, we propose ComAvg, a committee-based FL system for trustless settings. ComAvg provides a general coordination scheme for robust aggregation in distributed learning. With a reliability-assessment scheme that expels abnormal participants and fortified classic model-exchange methods, conventional centralized FL methods can be easily adapted into decentralized versions that cope with the two challenges above. Finally, we implement a prototype of ComAvg and evaluate its robustness across various settings.
The prototype-based evaluation results and theoretical analysis show that ComAvg is effective against model-level attacks such as sign-flipping, as well as communication-level isolation attacks.
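To make the committee idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of robust aggregation with a reliability-assessment step: each submitted update is scored against the coordinate-wise median of all updates, participants whose updates score below a threshold are expelled, and only the remaining updates are averaged. The scoring rule (cosine similarity to the median) and the threshold are illustrative assumptions, not details taken from ComAvg.

```python
import numpy as np

def committee_filter(updates, threshold=0.0):
    """Hypothetical reliability-assessment rule: score each update by its
    cosine similarity to the coordinate-wise median update, and expel
    participants whose score falls below the threshold."""
    ref = np.median(np.stack(updates), axis=0)  # robust reference direction
    kept = []
    for u in updates:
        denom = np.linalg.norm(u) * np.linalg.norm(ref)
        score = float(u @ ref / denom) if denom > 0 else 0.0
        if score >= threshold:
            kept.append(u)
    return kept

def com_avg(updates):
    """Average only the committee-approved updates."""
    return np.mean(np.stack(committee_filter(updates)), axis=0)

# Honest updates point in roughly one direction; a sign-flipping
# attacker submits an inverted update, which the filter rejects.
honest = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
flipped = [-honest[0]]
agg = com_avg(honest + flipped)
```

Under this toy rule, the sign-flipped update has negative similarity to the median and is excluded, so the aggregate equals the mean of the honest updates; a plain average would instead be dragged toward zero by the attacker.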
