Abstract
Decentralized Federated Learning (DFL) is a prevalent approach for efficiently training deep learning models while preserving privacy: participants share model gradients instead of their local data. However, participants in a DFL system may adopt dynamic behavior for personal gain. Existing DFL models assume that all participants are honest and cannot distinguish the adaptive behavior of participants in massively distributed environments. As a result, free riders and malicious participants remain undetected and unpenalized. In this paper, we present a DFL architecture in which decentralized participants assess one another's behavior using the quality of shared gradients. A novel dynamic reputation assessment protocol is implemented to detect and eliminate participants with adaptive behavior. We evaluate the proposed architecture against behavior-based attacks in a decentralized environment, increasing the percentage of adaptive participants from 10% to 40%. The results show that our protocol can detect and eliminate participants with adaptive behavior from the DFL in only two rounds, whereas centralized federated learning fails to detect behavior-based attacks.
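The abstract does not specify the reputation protocol itself, so the following is only a minimal illustrative sketch of the general idea it describes: each peer scores the gradients it receives and maintains a decaying reputation per sender, eliminating peers whose reputation falls below a threshold. All names (gradient_quality, ReputationTracker), the cosine-similarity quality metric, the coordinate-wise-median reference, and the decay/threshold values are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def gradient_quality(peer_grad: np.ndarray, reference_grad: np.ndarray) -> float:
    """Assumed quality metric: cosine similarity between a peer's gradient
    and a robust reference (here, the coordinate-wise median of all
    gradients received this round)."""
    denom = np.linalg.norm(peer_grad) * np.linalg.norm(reference_grad) + 1e-12
    return float(np.dot(peer_grad, reference_grad) / denom)

class ReputationTracker:
    """Hypothetical per-peer reputation with exponential decay, so a
    participant that alternates between honest and dishonest updates
    (adaptive behavior) is still penalized over time."""
    def __init__(self, decay: float = 0.5, threshold: float = 0.3):
        self.decay = decay          # weight given to past reputation
        self.threshold = threshold  # peers below this are eliminated
        self.scores: dict[str, float] = {}

    def update(self, peer_id: str, quality: float) -> None:
        old = self.scores.get(peer_id, 1.0)   # new peers start trusted
        q = max(0.0, quality)                 # clamp adversarial scores
        self.scores[peer_id] = self.decay * old + (1.0 - self.decay) * q

    def eliminated(self) -> set[str]:
        return {p for p, s in self.scores.items() if s < self.threshold}

# Toy demonstration: two honest peers share noisy versions of the same
# gradient; a free rider contributes a zero vector.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=100)
grads = {
    "honest_1": true_grad + 0.1 * rng.normal(size=100),
    "honest_2": true_grad + 0.1 * rng.normal(size=100),
    "free_rider": np.zeros(100),
}

tracker = ReputationTracker()
for _ in range(2):  # two assessment rounds
    reference = np.median(np.stack(list(grads.values())), axis=0)
    for peer, g in grads.items():
        tracker.update(peer, gradient_quality(g, reference))

print(tracker.eliminated())  # {'free_rider'}
```

With these illustrative parameters, a free rider's reputation decays from 1.0 to 0.5 and then to 0.25 over two rounds, crossing the elimination threshold, which loosely mirrors the two-round detection the abstract reports; the paper's actual quality metric, update rule, and thresholds may differ.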