Abstract
Researchers have proposed various deep neural network (DNN) watermarking schemes to verify the copyright of DNN models. However, most DNN watermarking methods cannot prevent attackers from stealing and using the model. Unlike many existing approaches, this paper uses a channel pruning algorithm to protect DNN models, which not only verifies a model's copyright but also prevents illegal use of the model. In this work, the pruning threshold or pruning rate serves as the secret key of a DNN model. After the secret key is distributed to multiple users, they prune the DNN model with the secret key, and the pruned and fine-tuned model is provided to the users. The users can verify ownership of the model according to the pruning accuracy and the fine-tuning accuracy. If the secret key is incorrect, the accuracy of the model after fine-tuning is very low, and users cannot use the inference function of the fine-tuned model. We conducted experiments on five popular DNN models with the CIFAR-10 and CIFAR-100 datasets. The experimental results show that multiple users can be authorized by pruning very few channels in the convolutional layers of a DNN model.
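The pruning-as-key idea described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the layer shape, the L1-norm pruning criterion, and the `prune_channels` helper are all assumptions chosen for illustration; the point is only that a threshold acting as a secret key deterministically selects which channels survive.

```python
import numpy as np

def prune_channels(filter_weights, threshold):
    """Hypothetical threshold-based channel pruning.

    filter_weights: conv-layer weights of shape (out_channels, in_channels, kH, kW).
    Channels whose L1 norm falls below `threshold` (the secret key) are
    removed; the remaining channels are kept.
    """
    # Per-output-channel L1 norm of the kernel weights.
    norms = np.abs(filter_weights).reshape(filter_weights.shape[0], -1).sum(axis=1)
    keep = norms >= threshold          # boolean mask over output channels
    return filter_weights[keep], keep

# Toy conv layer: 4 output channels, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
pruned, mask = prune_channels(w, threshold=20.0)
```

A wrong key (a different threshold) yields a different pruned architecture, so fine-tuning from the owner's released weights would not recover usable accuracy, which is the intuition behind the verification step in the abstract.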
Published in: Concurrency and Computation: Practice and Experience