As a novel distributed learning framework for protecting personal data privacy, federated learning (FL) has attracted widespread attention by having users share gradients instead of their raw data. However, an untrusted cloud server may infer users' individual information from the gradients and the global model, and may even forge incorrect aggregated results to save resources. Although existing works can protect local model privacy and achieve verifiability of aggregated results, they fall short in protecting global model privacy, fail to guarantee verifiability under collusion attacks, and suffer from high computation cost. To tackle these challenges, we propose VCSA, a verifiable and collusion-resistant secure aggregation scheme for FL. Concretely, we combine symmetric homomorphic encryption with single masking to protect model privacy. Meanwhile, we adopt verifiable multi-secret sharing and the generalized Pedersen commitment to achieve verifiability and prevent users from uploading incorrect shares. Furthermore, high model accuracy is maintained even if some users go offline. Security analysis shows that VCSA enhances the security of FL, achieves verifiability despite collusion attacks, and is robust to user dropout. Performance evaluation shows that VCSA reduces computation cost by at least 28.27% and 79.15%, respectively, compared to existing schemes.
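To illustrate one building block the abstract names for verifiability, the sketch below shows a Pedersen commitment and its additive homomorphism, which is what allows a claimed aggregate to be checked against the commitments to individual contributions. The group parameters here are toy values chosen for readability, not those of VCSA, and the generators are assumed to lie in a prime-order subgroup.

```python
# Hedged sketch: a plain Pedersen commitment C = g^m * h^r mod p.
# Toy parameters only -- p = 23 has an order-11 subgroup, and g = 4,
# h = 9 are assumed independent generators of that subgroup.
import random

p, q = 23, 11          # modulus and subgroup order (toy values)
g, h = 4, 9            # generators of the order-q subgroup mod p

def commit(m, r):
    """Commit to message m with blinding factor r (hiding and binding)."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Two users commit to their (toy) contributions.
m1, m2 = 3, 5
r1, r2 = random.randrange(q), random.randrange(q)
c1, c2 = commit(m1, r1), commit(m2, r2)

# Additive homomorphism: the product of commitments equals a
# commitment to the sum, so an aggregate can be verified without
# opening the individual values.
aggregate_commitment = (c1 * c2) % p
assert aggregate_commitment == commit((m1 + m2) % q, (r1 + r2) % q)
```

In a verifiable aggregation setting, this property lets each party check that the server's reported sum is consistent with the users' published commitments; the blinding factors keep the individual inputs hidden.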