Abstract

Federated learning, valued for its privacy preservation and resource efficiency, has emerged as a transformative paradigm in artificial intelligence and industrial machine learning. As the technology gains widespread adoption, however, the threat of model theft in federated settings grows increasingly prominent. Existing efforts concentrate on protecting standalone deep learning models; little attention has been devoted to models trained through federated learning, especially in decentralized environments. To fill this gap, we introduce PersistVerify, a novel approach for verifying model copyright within the federated learning framework. PersistVerify combines boundary sample selection with spatial attention mechanisms to strengthen the robustness and confidentiality of neural network backdoor watermarks, enabling secure verification of model ownership. It constructs a confidential and robust watermark dataset through three sequential steps: latent representation extraction, boundary sample selection, and spatial attention-based trigger embedding. Our work addresses the pervasive challenge of reliable model ownership verification in federated learning and demonstrates the efficacy of PersistVerify, offering insights and methodologies for maintaining the confidentiality and reliability of neural network backdoor watermarks in artificial intelligence and federated learning.
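
The abstract names a three-step pipeline for building the watermark set. The sketch below is a minimal illustration of one plausible wiring of such a pipeline in PyTorch; it is not the paper's actual method. The margin-based boundary criterion, the attention map derived from feature magnitudes, the blending coefficient alpha, and the ToyModel backbone/head interface are all illustrative assumptions.

```python
# Illustrative sketch of the three-step watermark-set construction named in
# the abstract. All design choices below (margin-based boundary selection,
# feature-magnitude attention, additive trigger blending) are assumptions
# for illustration only; the paper's actual procedure may differ.
import torch
import torch.nn.functional as F


def extract_latents(model, images):
    """Step 1 (assumed): obtain feature maps and class logits."""
    feats = model.backbone(images)                 # (N, C, H, W) feature maps
    logits = model.head(feats.mean(dim=(2, 3)))    # global-average-pooled head
    return feats, logits


def select_boundary_samples(logits, k):
    """Step 2 (assumed): keep the k samples with the smallest margin between
    the top-2 class logits, i.e., those nearest the decision boundary."""
    top2 = logits.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]
    return margin.topk(k, largest=False).indices


def embed_trigger(images, feats, trigger, alpha=0.3):
    """Step 3 (assumed): blend the trigger where spatial attention is lowest,
    so the embedded pattern stays inconspicuous."""
    attn = feats.abs().mean(dim=1, keepdim=True)   # (N, 1, H, W) attention map
    attn = F.interpolate(attn, size=images.shape[-2:],
                         mode="bilinear", align_corners=False)
    lo = attn.amin(dim=(2, 3), keepdim=True)
    hi = attn.amax(dim=(2, 3), keepdim=True)
    attn = (attn - lo) / (hi - lo + 1e-8)          # normalize to [0, 1]
    mask = 1.0 - attn                              # high weight = low attention
    return images + alpha * mask * trigger


def build_watermark_set(model, images, trigger, k, target_label):
    """Chain the three steps into a watermark dataset (inputs + target labels)."""
    feats, logits = extract_latents(model, images)
    idx = select_boundary_samples(logits, k)
    wm_images = embed_trigger(images[idx], feats[idx], trigger)
    wm_labels = torch.full((k,), target_label, dtype=torch.long)
    return wm_images, wm_labels


class ToyModel(torch.nn.Module):
    """Hypothetical model interface assumed by the sketch: a convolutional
    backbone producing feature maps, plus a linear classification head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.head = torch.nn.Linear(8, num_classes)


if __name__ == "__main__":
    model = ToyModel().eval()
    images = torch.rand(32, 3, 32, 32)
    trigger = torch.rand(1, 3, 32, 32)   # owner's secret trigger pattern
    with torch.no_grad():
        wm_x, wm_y = build_watermark_set(model, images, trigger,
                                         k=8, target_label=0)
    print(wm_x.shape, wm_y.shape)        # torch.Size([8, 3, 32, 32]) torch.Size([8])
```

At verification time, the owner would query a suspect model on the watermark inputs and check whether it predicts the target label at a rate far above chance, which is the standard usage pattern for backdoor-based ownership verification.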
