Abstract

Background: Face recognition, a form of biometric recognition, has great application value. Face recognition based on deep learning is now widely used in fields such as internet payment, network login, and authentication. However, face recognition deep learning models are easily replaced and tampered with. Once a model is illegally attacked, the intellectual property rights of the model owner are infringed and economic losses follow. To counter these threats, we use watermarking technology to embed an identity into the face recognition deep learning model; if the model is replaced or tampered with, we can prove that it belongs to us by extracting the watermarks.

Objective: In this study, we design a novel framework that embeds watermarks into the face recognition deep learning model as an identity, giving the watermark data the characteristics of both a trigger set and a data set. The watermarked model is robust enough to resist common machine learning attacks, and the special watermarks guarantee its ownership.

Method: We construct a special watermark trigger set and embed it into the model, so that it can be trained without human intervention or annotation. To remain flexible across a variety of applications, the scheme uses chaotic sequences to label the watermark trigger set, which guarantees the non-generalization of the watermark. The initial value and parameters used in the method serve as the keys to the model. We train four models with different numbers of trigger samples to study the effect of the number of trigger samples on model accuracy.

Results: We successfully propose a watermarking method for adding an identity to the face recognition deep learning model. The watermark extraction rate of the proposed framework is 100%, which means our method can successfully prove ownership of the model. In destructive experiments, models subjected to fine-tuning attacks still achieve high face recognition rates of over 99.00%, and the watermark extraction rate of each model remains 100%. Under overwriting attacks, the watermark extraction rates drop below 25% and the models cannot maintain their original performance, which means the watermarks provide protection until the model loses its ability. The experimental results indicate that the proposed scheme is robust against common machine learning attacks and prevents the model from being replaced or tampered with.

Conclusion: The proposed method is robust enough to resist machine learning attacks and fine-tuning attacks. It also provides good fidelity, safety, practicality, completeness, and effectiveness. With the help of special watermarks, related departments can effectively manage face recognition deep learning models. Moreover, the method can facilitate the commercialization of intelligent models.
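To illustrate the Method's keyed labeling step, the following is a minimal sketch of labeling a watermark trigger set with a chaotic sequence. The abstract only states that a chaotic sequence, keyed by its initial value and parameters, labels the trigger set; the choice of the logistic map, the function names, and all parameter values below are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch (assumed details): label a watermark trigger set
# using a chaotic sequence. The logistic map and all parameter values
# here are hypothetical; the paper specifies only that the initial value
# and parameters of the chaotic sequence act as the secret key.

def logistic_map(x0, r, n):
    """Generate n iterates of the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_labels(x0, r, n_samples, n_classes):
    """Quantize a chaotic sequence into class labels for the trigger set.

    (x0, r) play the role of the key: only the owner can regenerate the
    same label sequence and therefore verify the embedded watermark.
    """
    return [int(x * n_classes) % n_classes
            for x in logistic_map(x0, r, n_samples)]

# Example: assign labels to 8 trigger images over 10 identity classes.
labels = chaotic_labels(x0=0.3141, r=3.99, n_samples=8, n_classes=10)
```

Because the logistic map is sensitive to its initial condition, even a tiny change in the key produces a different label sequence, which is what makes the labeling non-generalizable to anyone without the key.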
