Abstract

Federated unlearning has emerged very recently as an attempt to realize "the right to be forgotten" in the context of federated learning. While the current literature focuses on designing efficient retraining and approximate unlearning approaches, it largely ignores the information leakage risks introduced by the discrepancy between the models before and after unlearning. In this paper, we perform a comprehensive review of prior studies on federated unlearning and on privacy leakage from model updates. We propose new taxonomies to categorize and summarize state-of-the-art federated unlearning algorithms. We present our findings on the inherent vulnerability of the federated unlearning paradigm to inference attacks and summarize defense techniques with the potential to prevent information leakage. Finally, we suggest a privacy-preserving federated unlearning framework, along with promising research directions, to facilitate future studies.
