Abstract

The combination of big data and machine learning brings great convenience, but it also introduces the risk of data privacy leakage. Traditional machine learning services can no longer satisfy privacy-protection requirements. Federated learning mitigates the threat of privacy disclosure; however, adversaries can still mount inference attacks against the shared model, or even reconstruct the raw training data, leaking the participants' private data. To address this problem, we propose a secure federated learning mechanism based on a variational autoencoder (VAE) that resists inference attacks. Each participant uses its raw data to generate forged data with a VAE and trains its local model on the forged data, thereby protecting data privacy while preserving the quality of the global model. Experimental results show that the proposed mechanism maintains high global-model accuracy while reducing the probability that a participant's raw data can be reconstructed.
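To make the mechanism concrete, the sketch below shows a minimal VAE and a `forge` step in which each raw sample is replaced by a reconstruction decoded from a sampled latent, so that only forged data feeds local training. This is an illustrative sketch, not the paper's exact implementation: the PyTorch framework, the flattened 784-dimensional input (e.g., MNIST), the layer sizes, and the function names `vae_loss` and `forge` are all assumptions introduced here.

```python
# Minimal sketch of VAE-based data forging for private local training.
# Assumptions: PyTorch, flattened 28x28 inputs in [0, 1]; architecture and
# hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=400, latent=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, in_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

@torch.no_grad()
def forge(vae, x):
    # Replace each raw sample with a reconstruction decoded from a sampled
    # latent, so raw data never enters local model training directly.
    mu, logvar = vae.encode(x)
    z = vae.reparameterize(mu, logvar)
    return vae.decode(z)
```

Under this sketch, each participant would train its local model on `forge(vae, x)` in place of `x`, and only the resulting local model updates would be sent to the aggregator, so an adversary inspecting those updates can at best recover the forged data rather than the raw samples.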
