Abstract

Many edge computing service providers want to leverage the computational power and data of edge nodes to improve their models without transmitting raw data. Federated learning enables distributed edge nodes to collaboratively train a global model without sharing their training data. Unfortunately, existing privacy-preserving federated learning schemes applied to this scenario still face three challenges: 1) they typically employ complex cryptographic algorithms, which incur excessive training overhead; 2) they cannot guarantee Byzantine robustness while preserving data privacy; and 3) edge nodes have limited computing power and may drop out frequently. As a result, privacy-preserving federated learning cannot be applied effectively in edge computing scenarios. We therefore propose LSFL, a lightweight and secure federated learning scheme that combines privacy preservation with Byzantine robustness. Specifically, we design a Lightweight Two-Server Secure Aggregation protocol, which uses two servers to perform Byzantine-robust model aggregation securely. This design protects data privacy and prevents Byzantine nodes from influencing model aggregation. We implement and evaluate LSFL in a LAN environment, and the experimental results show that LSFL meets its fidelity, security, and efficiency design goals and maintains model accuracy comparable to the popular FedAvg scheme.
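
The abstract does not spell out the protocol's construction, but two-server aggregation schemes of this kind are commonly built on additive secret sharing between two non-colluding servers. The following is a minimal sketch of that general technique, not of LSFL itself; the modulus, the share encoding, and all function names are illustrative assumptions, and the Byzantine-robustness checks that LSFL layers on top are omitted here.

    import numpy as np

    # Sketch of two-server secure aggregation via additive secret sharing.
    # Each node splits its integer-encoded model update into two random
    # shares, one per server; neither server alone learns the update.
    PRIME = 2**31 - 1  # hypothetical modulus for the share arithmetic

    def make_shares(update, rng):
        """Split an update into two additive shares modulo PRIME."""
        share_a = rng.integers(0, PRIME, size=update.shape, dtype=np.int64)
        share_b = (update - share_a) % PRIME
        return share_a, share_b  # share_a goes to server A, share_b to server B

    def server_aggregate(shares):
        """Each server independently sums the shares it received."""
        return np.sum(shares, axis=0) % PRIME

    def reconstruct(sum_a, sum_b):
        """Combining the two partial sums yields only the aggregate update."""
        return (sum_a + sum_b) % PRIME

    rng = np.random.default_rng(0)
    updates = [rng.integers(0, 100, size=4, dtype=np.int64) for _ in range(3)]

    shares = [make_shares(u, rng) for u in updates]
    sum_a = server_aggregate([a for a, _ in shares])  # server A's view
    sum_b = server_aggregate([b for _, b in shares])  # server B's view

    # The reconstructed aggregate equals the plain sum of the updates.
    assert np.array_equal(reconstruct(sum_a, sum_b),
                          np.sum(updates, axis=0) % PRIME)

The design choice this illustrates is why two servers suffice: as long as the servers do not collude, each sees only uniformly random shares, while their combined partial sums reveal nothing beyond the aggregate, which is exactly what model aggregation needs.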
