Abstract

Federated Learning is a collaborative machine learning paradigm that enables model training on decentralized data while preserving data privacy, and it has attracted significant attention for its potential applications across many domains. However, protecting model copyright in the Federated Learning setting has become a critical concern. In this paper, we propose a novel watermarking framework called FedCRMW (Federal Learning Compression-Resistance Model Watermark) to address the challenge of model copyright protection in Federated Learning. FedCRMW embeds unique watermarks into client-contributed models, ensuring ownership, integrity, and authenticity. The framework leverages client-specific identifiers and exclusive logos to construct trigger sets for watermark embedding, enhancing security and traceability. A key advantage of FedCRMW is its optimization for the data compression commonly applied in Federated Learning: by using compressed data inputs for copyright verification, it achieves an efficient watermark validation process and reduces communication and storage overheads. Experimental results demonstrate the effectiveness of FedCRMW in terms of watermark success rate, imperceptibility, robustness against attacks, and resistance to model compression and pruning. Compared to existing watermarking methods, FedCRMW exhibits superior performance in the Federated Learning context.
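The abstract describes constructing trigger sets from client-specific identifiers and logos, then verifying ownership via the watermark success rate on those triggers. The paper's actual construction is not given here; the following is a minimal illustrative sketch under assumed details: a logo patch is stamped at a position derived from a hash of the client identifier, and all helper names (`build_trigger_set`, `watermark_success_rate`) are hypothetical, not from the paper.

```python
import hashlib
import numpy as np

def build_trigger_set(images, client_id, logo, target_label):
    """Overlay a client-specific logo patch on images to form a trigger set.

    Hypothetical construction: the patch position is derived from a hash of
    the client identifier, so each client's trigger set is unique and traceable.
    """
    h = int(hashlib.sha256(client_id.encode()).hexdigest(), 16)
    ph, pw = logo.shape
    H, W = images.shape[1:3]
    y = h % (H - ph)                 # client-keyed patch row
    x = (h // (H - ph)) % (W - pw)   # client-keyed patch column
    triggered = images.copy()
    triggered[:, y:y + ph, x:x + pw] = logo
    labels = np.full(len(images), target_label)
    return triggered, labels

def watermark_success_rate(predict, triggered, labels):
    """Fraction of trigger inputs the model maps to the watermark label."""
    return float(np.mean(predict(triggered) == labels))
```

In a full pipeline, each client would fine-tune its local model on the trigger set alongside its task data, and ownership would later be checked by thresholding the success rate on (possibly compressed) trigger inputs.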


