Abstract

Privacy-preserving and Byzantine-resilient machine learning has been an important research topic, and many centralized methods have been developed. However, it is difficult for these methods to achieve fast learning and high accuracy simultaneously. In contrast, federated learning based on local model masking, such as Byzantine-Resilient Secure Aggregation (BREA), is a promising approach to achieving both at once. Although BREA preserves privacy with lightweight computation by randomizing users' local models, its verification of the shares generated from those models, which mitigates Byzantine attacks, still incurs high communication complexity. This paper designs a share verification method for BREA that offloads part of the share verification process from users to a semi-honest server, avoiding the broadcast of large commitments to shares. In addition, to mitigate the increase in computation time caused by the offloaded computations, our method makes the verification algorithm running on the server efficient and executes the server and user computations in parallel. In our experiments, our method provides a speedup of up to $15\times$ on low-bandwidth networks such as mobile networks. Our method also preserves BREA's resilience against Byzantine attacks.
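As background, the share verification the abstract refers to is a commitment-based consistency check in the spirit of Feldman's verifiable secret sharing. The sketch below is only an illustration of that generic check, not this paper's protocol or its server-offloaded variant; the toy group parameters P, Q, G and the helpers deal and verify are assumptions chosen for readability.

# Minimal sketch of Feldman-style verifiable secret sharing (VSS) share
# verification. NOT the exact BREA protocol: the group parameters and helper
# names are illustrative assumptions, kept small enough to check by hand.

import random

# Toy group: g = 2 generates a subgroup of prime order q = 11 in Z_p*, p = 2q + 1.
P, Q, G = 23, 11, 2
T = 2   # polynomial degree (threshold - 1)

def deal(secret, n):
    """Split `secret` into n Shamir shares and publish Feldman commitments."""
    coeffs = [secret % Q] + [random.randrange(Q) for _ in range(T)]
    shares = {i: sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q
              for i in range(1, n + 1)}
    commitments = [pow(G, c, P) for c in coeffs]   # C_j = g^{a_j} mod p
    return shares, commitments

def verify(i, share, commitments):
    """Check g^{s_i} == prod_j C_j^{i^j} (mod p); rejects tampered shares."""
    lhs = pow(G, share, P)
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = (rhs * pow(c, pow(i, j, Q), P)) % P
    return lhs == rhs

shares, commitments = deal(secret=7, n=5)
assert all(verify(i, s, commitments) for i, s in shares.items())
assert not verify(1, (shares[1] + 1) % Q, commitments)   # corrupted share is rejected
print("all honest shares verified; corrupted share rejected")

In BREA-style schemes each user runs such a check against every other user's commitments, which is why broadcasting the commitments dominates communication; the method summarized above moves part of this check to a semi-honest server instead.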
