Abstract

Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce threats to user data privacy. Training is performed on the raw data residing on the users' devices, called clients; only the training results, called gradients, are sent to the server, where they are aggregated to generate an updated global model. However, the server cannot be assumed trustworthy with sensitive information, such as metadata related to the owner or source of the data. Hiding client information from the server therefore reduces exposure to privacy-related attacks, so protecting the privacy of the client's identity, along with the privacy of the client's data, is necessary to prevent such attacks. This paper proposes an efficient and privacy-preserving protocol for FL based on group signatures. A new group signature scheme for federated learning, called GSFL, is designed not only to protect the privacy of the client's data and identity but also to significantly reduce computation and communication costs, given the iterative nature of federated learning. We show that GSFL outperforms existing approaches in terms of computation, communication, and signaling costs. We also show that the proposed protocol can withstand various security attacks in the federated learning environment. Moreover, we provide a security proof of our protocol using a formal security verification tool, Automated Validation of Internet Security Protocols and Applications (AVISPA).
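To illustrate the workflow the abstract describes, the sketch below shows one federated round in which each client trains locally, signs its gradient as an anonymous group member, and the server verifies and aggregates. This is a minimal illustration, not the GSFL construction from the paper: the names group_sign, group_verify, and local_training are hypothetical, and the group signature is stubbed with an HMAC over a shared group secret purely to keep the example runnable (a real group signature would let the server verify membership without learning which client signed).

```python
# One federated-learning round with group-signed updates (illustrative only).
import hmac, hashlib, pickle
from typing import List

GROUP_SECRET = b"demo-group-secret"  # stand-in for real group-signature keys

def group_sign(update: List[float]) -> bytes:
    # Hypothetical placeholder for a group signature; here just an HMAC.
    return hmac.new(GROUP_SECRET, pickle.dumps(update), hashlib.sha256).digest()

def group_verify(update: List[float], sig: bytes) -> bool:
    # Server checks group membership without learning the signer's identity.
    return hmac.compare_digest(sig, group_sign(update))

def local_training(model: List[float], data: List[float]) -> List[float]:
    # Toy "gradient": move each weight toward the mean of the local data.
    mean = sum(data) / len(data)
    return [mean - w for w in model]

def server_aggregate(model, signed_updates):
    # Drop updates with invalid signatures, then average the rest (FedAvg-style).
    accepted = [u for u, s in signed_updates if group_verify(u, s)]
    step = [sum(g) / len(accepted) for g in zip(*accepted)]
    return [w + 0.1 * s for w, s in zip(model, step)]  # learning rate 0.1

model = [0.0, 0.0]
client_data = [[1.0, 2.0], [3.0, 4.0], [2.0, 2.0]]  # raw data never leaves clients
signed = []
for data in client_data:
    grad = local_training(model, data)
    signed.append((grad, group_sign(grad)))  # client signs as a group member
model = server_aggregate(model, signed)
print(model)
```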
