Abstract

Federated learning is a privacy-aware collaborative machine learning method in which clients jointly construct a global model by training local models on their own data and sending only the local model updates to the server. Although it enhances privacy by letting clients collaborate without sharing their training data, it remains prone to sophisticated privacy attacks because the local model updates sent to the server can leak information. To prevent such attacks, secure aggregation protocols are generally proposed so that the server can access only the aggregated result, not the individual local model updates. However, such secure aggregation approaches may prevent the execution of defense mechanisms against security attacks on model training, such as poisoning and backdoor attacks, because the server cannot access the individual local model updates and, therefore, cannot analyze them to detect anomalies caused by these attacks. Thus, federated learning needs solutions that satisfy privacy and security at the same time, or new privacy-preserving solutions that allow the server to perform some analysis on the local model updates without violating privacy. In this paper, we introduce a novel security-friendly privacy solution for federated learning based on multi-hop communication to hide clients’ identities. Our solution ensures that the forwardee clients on the path between the source client and the server can neither alter model updates nor contribute more than one local model update to the global model construction in a single FL round. We then propose two approaches that also make the solution robust against possible malicious packet-drop behavior by the forwardee clients.
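To make these guarantees concrete, the following is a minimal, self-contained sketch (not the paper's actual construction) of one way such a round could work: the server issues each client a one-time round token, assumed here to be issued anonymously so the server cannot link it to a client identity; the client tags its update with an HMAC under that token; honest forwardees merely relay the packet; and the server rejects altered updates (invalid tag) as well as second contributions under the same token. The `Server`, `issue_token`, `make_packet`, and `forward` names are all hypothetical.

```python
# Illustrative toy round: one-time tokens give integrity and one-update-per-round
# enforcement without the server learning which client sent which update
# (anonymous token issuance is assumed, not implemented here).
import hashlib
import hmac
import os
import random

def digest(token: bytes) -> str:
    return hashlib.sha256(token).hexdigest()

class Server:
    def __init__(self):
        self.valid_tokens = {}  # H(token) -> token, issued anonymously by assumption
        self.used = set()       # H(token) values already redeemed this round
        self.accepted = []      # verified updates awaiting aggregation

    def issue_token(self) -> bytes:
        # Hypothetical anonymous issuance: the server records H(token) but,
        # by assumption, cannot link the token to a client identity.
        token = os.urandom(16)
        self.valid_tokens[digest(token)] = token
        return token

    def receive(self, packet) -> bool:
        token_id, update, tag = packet
        token = self.valid_tokens.get(token_id)
        if token is None or token_id in self.used:
            return False  # unknown token, or a second update in the same round
        expected = hmac.new(token, repr(update).encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False  # a forwardee altered the update in transit
        self.used.add(token_id)
        self.accepted.append(update)
        return True

def make_packet(token: bytes, update):
    # The source client binds its update to the secret token via an HMAC tag.
    tag = hmac.new(token, repr(update).encode(), hashlib.sha256).digest()
    return (digest(token), update, tag)

def forward(packet, hops: int):
    # Multi-hop relay: forwardees only pass the packet along; without the
    # secret token they cannot produce a valid tag for a modified update.
    for _ in range(hops):
        pass  # honest forwardees just relay
    return packet

server = Server()
token = server.issue_token()
update = [0.1, -0.2, 0.05]  # a client's local model update (toy vector)
packet = forward(make_packet(token, update), hops=random.randint(1, 3))

tampered = (packet[0], [9.9, 9.9, 9.9], packet[2])
print(server.receive(tampered))  # False: altered update fails the integrity check
print(server.receive(packet))    # True: the genuine update is accepted once
print(server.receive(packet))    # False: a second contribution this round is rejected
```

A real protocol would need cryptographic machinery for unlinkable token issuance (e.g., blind signatures or anonymous credentials), which this sketch deliberately omits.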
