Abstract

With the growing number of smart and IoT devices, data is readily available. However, application-specific data exists only in small chunks, distributed across demographics. Moreover, sharing data online raises serious concerns and poses various security and privacy threats. To address these issues, federated learning (FL) has emerged as a promising secure and collaborative learning solution. FL brings the machine learning model to the data owners, trains it locally, and then sends the trained model to the central curator for final aggregation. However, FL is prone to poisoning and inference attacks in the presence of malicious participants and curious servers. Different Byzantine-robust aggregation schemes exist to mitigate poisoning attacks, but they require raw access to the model updates, thus exposing the submitted updates to inference attacks. This work proposes a Byzantine-Robust and Inference-Resistant Federated Learning Framework using Permissioned Blockchain, called PrivateFL. PrivateFL replaces the central curator with a Hyperledger Fabric network. Further, we propose VPSA (Vertically Partitioned Secure Aggregation), tailored to the PrivateFL framework, which performs robust and secure aggregation. Theoretical analysis proves that VPSA resists inference attacks even if n−1 peers are compromised. A secure prediction mechanism for securely querying the global model is also proposed for the PrivateFL framework. Experimental evaluation shows that PrivateFL performs better than traditional (centralized) learning systems while remaining resistant to poisoning and inference attacks.
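The abstract does not detail how VPSA distributes and combines model updates. As a rough, hypothetical illustration of the additive secret-sharing idea on which inference-resistant aggregation schemes of this kind are commonly built (all function names and the peer/client setup below are assumptions, not the paper's actual protocol), each client can split every coordinate of its update into n shares, so that any single peer sees only random-looking values, while the sum across peers recovers the aggregate:

```python
import random

PRIME = 2**61 - 1  # large prime modulus for additive secret sharing


def make_shares(value, n):
    """Split an integer into n additive shares summing to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def secure_aggregate(updates, n_peers):
    """Aggregate client updates so that no single peer sees a raw update.

    Each client splits every coordinate of its (fixed-point integer)
    update into n_peers additive shares; peer j only ever holds the
    j-th share of each coordinate. Summing the peers' partial sums
    recovers the total update without revealing any individual one;
    any n_peers - 1 colluding peers still see only uniform randomness.
    """
    dim = len(updates[0])
    # peer_sums[j][k] = sum over clients of the j-th share of coordinate k
    peer_sums = [[0] * dim for _ in range(n_peers)]
    for upd in updates:
        for k, v in enumerate(upd):
            for j, s in enumerate(make_shares(v, n_peers)):
                peer_sums[j][k] = (peer_sums[j][k] + s) % PRIME
    # final aggregation: combine the per-peer partial sums
    return [sum(peer_sums[j][k] for j in range(n_peers)) % PRIME
            for k in range(dim)]


# three clients with 4-dimensional integer updates, shared across 5 peers
updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
print(secure_aggregate(updates, n_peers=5))  # [111, 222, 333, 444]
```

In PrivateFL the role of these peers would be played by Hyperledger Fabric nodes rather than a single central curator; the sketch above only conveys why compromising n−1 peers reveals nothing about any client's raw update.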
