Abstract

Federated Learning (FL) is a significant approach in distributed machine learning that enables multiple parties to collaboratively train models while keeping their private datasets confidential. FL nonetheless raises privacy concerns: private information can be inferred from the gradients or model updates shared during training. This work proposes SecureHE-Fed, a novel framework that strengthens FL's defenses against privacy attacks by combining Homomorphic Encryption (HE) and Zero-Knowledge Proofs (ZKP). SecureHE-Fed encrypts client data before it enters the learning procedure, which allows computation on ciphertexts without ever exposing the underlying data. As an additional safeguard, ZKPs are used to verify that model updates are valid without revealing their contents. Evaluating SecureHE-Fed against several existing FL techniques, we demonstrate that it improves confidentiality while preserving model accuracy. These results validate SecureHE-Fed as a secure and scalable FL approach, and we recommend it for applications where user confidentiality is essential.
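The core mechanism the abstract relies on, a server aggregating encrypted client updates without decrypting them, rests on additive homomorphism. The paper's actual HE scheme and parameters are not specified in the abstract; the following is only a minimal illustrative sketch using a toy Paillier cryptosystem with tiny, insecure primes.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
# the primes are tiny and the scheme as written is NOT secure. It merely
# demonstrates the property SecureHE-Fed depends on, namely that the
# server can sum encrypted client updates while seeing only ciphertexts.

p, q = 293, 433            # toy primes (hypothetical parameters)
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
# mu = inverse of L(g^lam mod n^2) mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    """Encrypt integer m: c = g^m * r^n mod n^2 with random r."""
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt: m = L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each client encrypts its (integer-quantized) update; multiplying the
# ciphertexts corresponds to adding the underlying plaintexts, so the
# aggregator never sees any individual client's value.
client_updates = [7, 11, 5]
aggregate_ct = 1
for u in client_updates:
    aggregate_ct = (aggregate_ct * encrypt(u)) % n2

assert decrypt(aggregate_ct) == sum(client_updates)
```

In a deployed system the same pattern would use a production-grade HE library and quantized model gradients rather than small integers; the ZKP component described above would additionally let each client prove its ciphertext encodes a well-formed update.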
