Nowadays, many edge computing service providers seek to leverage the computational power and data of edge nodes to improve their models without transmitting raw data. Federated learning facilitates collaborative training of global models among distributed edge nodes without sharing their training data. Unfortunately, existing privacy-preserving federated learning applied to this scenario still faces three challenges: 1) it typically employs complex cryptographic algorithms, which results in excessive training overhead; 2) it cannot guarantee Byzantine robustness while preserving data privacy; and 3) edge nodes have limited computing power and may drop out frequently. As a result, existing privacy-preserving federated learning cannot be effectively applied to edge computing scenarios. Therefore, we propose LSFL, a lightweight and secure federated learning scheme that combines privacy preservation with Byzantine robustness. Specifically, we design the Lightweight Two-Server Secure Aggregation protocol, which utilizes two servers to enable Byzantine-robust, privacy-preserving model aggregation. The protocol protects data privacy and prevents Byzantine nodes from influencing model aggregation. We implement and evaluate LSFL in a LAN environment, and the experimental results show that LSFL meets its fidelity, security, and efficiency design goals and maintains model accuracy comparable to the popular FedAvg scheme.
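To illustrate the general idea behind a two-server secure aggregation protocol of the kind the abstract describes, the sketch below shows the standard additive secret-sharing pattern: each client splits its model update into two random shares, so neither server alone sees a plaintext update, yet the sum of both servers' partial aggregates equals the sum of all updates. This is a minimal illustration only, not LSFL itself; the function names (`share_update`, `aggregate`) are hypothetical, and LSFL's actual protocol, including its Byzantine-robustness checks and dropout handling, is detailed in the full paper.

```python
import numpy as np

def share_update(update, rng):
    """Split a model update into two additive shares.

    Each server alone receives only a random-looking share,
    so neither learns the client's plaintext update.
    """
    mask = rng.standard_normal(update.shape)
    return update - mask, mask  # share for server A, share for server B

def aggregate(shares):
    """Each server independently sums the shares it received."""
    return np.sum(shares, axis=0)

# Toy run: 3 clients, each holding a 4-parameter model update.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(4) for _ in range(3)]

shares_a, shares_b = zip(*(share_update(u, rng) for u in updates))

# Servers A and B each compute a partial aggregate over their own shares.
partial_a = aggregate(shares_a)
partial_b = aggregate(shares_b)

# Recombining the partial aggregates yields the sum of all updates,
# without either server having observed any individual update.
assert np.allclose(partial_a + partial_b, np.sum(updates, axis=0))
```

Schemes of this shape typically assume the two servers do not collude; under that assumption the aggregation avoids heavy cryptographic machinery, which is consistent with the lightweight design goal stated above.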