Abstract
Federated Learning (FL) breaks down "data islands" by letting clients cooperatively train a shared model while keeping their private data local, and hierarchical frameworks are often used in FL to alleviate excessive communication overhead. Although FL provides a degree of privacy, privacy-inference and Byzantine attacks still affect existing FL methods. Moreover, the parameter server introduces new security risks, and the group size limits the model's robustness. This paper therefore presents a new decentralized FL framework, Trustiness-based Hierarchical Decentralized Federated Learning (TH-DFL), built on a Security Robust Aggregation (SRA) rule that introduces a trust mechanism. TH-DFL provides privacy protection and robustness even in the presence of malicious nodes, and it also reduces communication overhead to some extent. A series of comparative experiments evaluates the framework's performance, with Gaussian, bit-flipping, and other attacks launched against the simulated federated learning system. The results show that, while ensuring both privacy and robustness, TH-DFL strikes a better balance between the two as the group size changes.
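The abstract does not detail the SRA rule itself, but the Byzantine attacks it names (e.g., bit-flipping) can be illustrated with a generic robust aggregation rule. The sketch below uses a coordinate-wise median, a standard Byzantine-robust aggregator chosen here purely for illustration; the function names and the toy updates are assumptions, not the paper's method.

```python
import statistics

def robust_aggregate(updates):
    """Coordinate-wise median over client updates (a generic
    Byzantine-robust rule; stands in for the paper's SRA, whose
    details are not given in the abstract)."""
    dim = len(updates[0])
    return [statistics.median(u[i] for u in updates) for i in range(dim)]

# Four honest clients report the true update [1.0, -2.0];
# one Byzantine client flips the sign (a bit-flipping-style attack).
honest = [[1.0, -2.0]] * 4
byzantine = [[-1.0, 2.0]]
updates = honest + byzantine

# A plain mean is pulled toward the attacker ...
mean = [sum(u[i] for u in updates) / len(updates) for i in range(2)]
print(mean)  # → [0.6, -1.2]

# ... while the median recovers the honest update.
print(robust_aggregate(updates))  # → [1.0, -2.0]
```

With fewer than half the clients malicious in each coordinate, the median ignores the outliers entirely, which is why median-style rules are a common baseline against Gaussian-noise and bit-flipping attacks.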