Abstract

Federated learning (FL) has emerged as a distributed framework for building deep learning models through the collaborative efforts of multiple participants. Because so many participants have access to the jointly trained model, copyright protection of FL deep models is urgently required. Recently, the secure FL framework was developed to address data leakage when the central node is not fully trustworthy. Its encryption process, however, makes it impossible for existing DL model watermarking schemes to embed a watermark at the central node. In this paper, we propose a novel client-side federated learning watermarking method to tackle the model verification problem under the secure FL framework. Specifically, we design a backdoor-based watermarking scheme that allows model owners to embed their pre-designed noise patterns into the FL deep model. Our method thus provides reliable copyright protection while preserving data privacy, since the central node has no access to the encrypted gradient information. Experimental results demonstrate the effectiveness of our method in terms of both FL model performance and watermarking robustness.
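To make the backdoor-based idea concrete, the sketch below shows one common way such a trigger set can be built: a fixed, owner-designed noise pattern is stamped onto copies of ordinary images, which are then relabeled to a secret target class; a client trains on these alongside its local data, and the owner later queries the deployed model with the pattern to verify ownership. This is a minimal illustration assuming NumPy-style image batches; the function and parameter names are hypothetical and the paper's exact trigger construction may differ.

```python
import numpy as np

def make_trigger_set(images, pattern, target_label, strength=0.1):
    """Build a backdoor trigger set for watermarking (illustrative sketch).

    images: float array of shape (N, H, W) with values in [0, 1]
    pattern: the owner's secret noise pattern, shape (H, W)
    target_label: the secret class all triggered samples are mapped to
    """
    # Stamp the owner's pattern onto copies of the images, keeping
    # pixel values in the valid [0, 1] range.
    triggered = np.clip(images + strength * pattern, 0.0, 1.0)
    # Relabel every triggered sample to the secret target class.
    labels = np.full(len(images), target_label, dtype=np.int64)
    return triggered, labels

# Example: four grayscale 8x8 "images" and a reproducible owner pattern.
rng = np.random.default_rng(42)
imgs = rng.random((4, 8, 8))
pattern = rng.standard_normal((8, 8))  # acts as the owner's secret key
wm_imgs, wm_labels = make_trigger_set(imgs, pattern, target_label=7)
```

During client-side training, the trigger set is simply mixed into the local batches, so the watermark is embedded without the central node ever seeing the pattern or the plaintext gradients.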
