Abstract

With the growing popularity of IoT (Internet of Things) applications, edge computing has attracted considerable attention. To meet the data privacy requirements of edge nodes and to cope with their unbalanced data distributions, federated learning (FL), a distributed learning framework, is widely used in intelligent edge computing applications. Recent studies, however, have shown that FL still suffers from privacy leakage, including membership inference and data reconstruction attacks. These studies focus mainly on the feature information of private data. In this paper, we instead consider user-level label privacy in FL. We propose LDIA, a label distribution inference attack against FL in edge computing, which explores the possibility that an honest-but-curious cloud server can infer the proportion of samples per label in an edge user's private data. LDIA is inspired by the observation that parameter changes in the output layer of a model reflect the label distribution of its training data. We train a neural network to learn the characteristic features that output-layer updates exhibit under different label distributions, and then infer label distributions from the local models uploaded by users. Our comprehensive evaluation shows that LDIA is effective on various datasets under different settings, demonstrating severe privacy leakage in FL-based edge computing.
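The core observation, that output-layer parameter changes encode the label distribution of a client's local training data, can be illustrated with a toy simulation. The sketch below is not the paper's implementation: the model sizes, the synthetic class-centroid data, the Dirichlet sampling of client label distributions, and the attack network are all assumptions made purely for illustration of the general idea.

```python
# Toy sketch of a label distribution inference attack (assumptions only;
# not the LDIA architecture or training procedure from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
C, D, H = 5, 20, 32                    # classes, feature dim, hidden dim (assumed)
CLASS_MEANS = torch.randn(C, D) * 2.0  # fixed synthetic class centroids

def new_model():
    return nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, C))

GLOBAL = new_model()  # a single fixed global model; real FL rounds would evolve it

def output_layer_update(dist, n=256, lr=0.5, steps=5):
    """Train a copy of the global model on data whose labels follow `dist`;
    return the flattened change in the output (last linear) layer."""
    model = new_model()
    model.load_state_dict(GLOBAL.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    y = torch.multinomial(dist, n, replacement=True)   # labels drawn from dist
    x = CLASS_MEANS[y] + torch.randn(n, D)             # class-dependent features
    before = torch.cat([p.detach().flatten() for p in model[2].parameters()])
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    after = torch.cat([p.detach().flatten() for p in model[2].parameters()])
    return after - before

# Attack model (assumed shape): maps an output-layer update to class proportions.
attack = nn.Sequential(nn.Linear((H + 1) * C, 64), nn.ReLU(), nn.Linear(64, C))
attack_opt = torch.optim.Adam(attack.parameters(), lr=1e-3)

# Train the attack on updates produced by many random label distributions.
for _ in range(300):
    dists = torch.distributions.Dirichlet(torch.ones(C)).sample((16,))
    updates = torch.stack([output_layer_update(d) for d in dists])
    pred = F.log_softmax(attack(updates), dim=1)
    loss = F.kl_div(pred, dists, reduction="batchmean")
    attack_opt.zero_grad(); loss.backward(); attack_opt.step()

# Inference: estimate a victim client's label proportions from its update alone.
victim = torch.tensor([0.70, 0.10, 0.10, 0.05, 0.05])
est = F.softmax(attack(output_layer_update(victim).unsqueeze(0)), dim=1)
print("true:", victim.tolist())
print("est :", [round(v, 2) for v in est.squeeze(0).tolist()])
```

In this sketch the "server" sees only the output-layer update, never the client's data, mirroring the abstract's threat model of an honest-but-curious aggregator; the attack network's predicted proportions typically track the skew of the victim's true distribution.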
