Abstract

Federated learning (FL) models are constructed by multiple participants who contribute their training datasets and collaborate in joint training. However, the training and deployment processes face various intellectual-property challenges, such as illegal model theft and data leakage. Existing FL protection frameworks verify model ownership for each client independently; under collusion attacks (i.e., when multiple clients conspire to steal the model jointly), they cannot accurately identify which clients stole the model. To address this challenge, a novel watermarking protection scheme against collusion attacks for federated learning is proposed in this work. It employs anti-collusion coding to design unique watermark information for each client, which enables colluders to be detected effectively. Furthermore, it embeds the watermark information into each batch-normalization layer using a dedicated regularized loss function together with skip connections. The experimental results demonstrated that embedding different watermark information into each client's model did not affect the accuracy of the original task, that colluders were identified with approximately 100% accuracy, and that the watermark embedding and extraction times amounted to only 1.53% and 0.29% of the original task's training time, respectively. The scheme also exhibited high robustness against various common attacks, including fine-tuning, pruning, and collusion attacks.
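To make the embedding mechanism concrete, the sketch below shows one plausible way to realize a per-client watermark regularizer on batch-normalization scale parameters, in the spirit of the regularized loss described above. The codeword generator, random projection matrix, hinge-style penalty, and loss weight are illustrative assumptions rather than details taken from the paper, and the skip-connection aspect of the scheme is not modeled here.

```python
# Minimal sketch (not the authors' implementation): embed a per-client binary
# watermark into BatchNorm scale (gamma) parameters via a regularized loss.
import torch
import torch.nn as nn


def client_codeword(client_id: int, length: int = 64) -> torch.Tensor:
    # Stand-in for an anti-collusion code: a deterministic pseudo-random
    # +/-1 codeword seeded by the client id (hypothetical choice).
    g = torch.Generator().manual_seed(client_id)
    return torch.randint(0, 2, (length,), generator=g).float() * 2 - 1


def watermark_loss(model: nn.Module, codeword: torch.Tensor,
                   projection: torch.Tensor) -> torch.Tensor:
    # Collect the learnable scale (gamma) parameters of every BatchNorm layer.
    gammas = [m.weight for m in model.modules()
              if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]
    w = torch.cat([g.flatten() for g in gammas])
    # Project the BN scales to codeword length and push the projected signs
    # toward the client's codeword with a hinge-style penalty.
    logits = projection @ w
    return torch.clamp(1.0 - codeword * logits, min=0.0).mean()


# Usage: total loss = task loss + lambda * watermark regularizer.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
code = client_codeword(client_id=7)
n_gamma = sum(m.weight.numel() for m in model.modules()
              if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)))
proj = torch.randn(code.numel(), n_gamma)  # fixed secret projection per client
task_loss = torch.tensor(0.0)              # stand-in for the real task loss
loss = task_loss + 0.1 * watermark_loss(model, code, proj)
loss.backward()
```

Extraction would then amount to recomputing the projected signs from a suspect model's BN scales and matching them against each client's codeword, which is what makes colluder tracing possible when the codewords come from an anti-collusion code.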
