With deep learning achieving remarkable results across modern industry, intellectual property (IP) protection for deep learning models has attracted the attention of both academics and engineers. Training a commercially viable deep learning model typically requires substantial professional resources and time; once a malicious user clones, illegally distributes, or uses such a model, the owner's IP is infringed and market share may even be stolen. Among existing IP protection methods, black-box watermarking approaches are generally preferred, and the content and labels of the trigger set are the core of these watermarking techniques. However, most schemes do not consider the security and invisibility of the trigger set, which allows an attacker to trigger the model with a forged trigger set and thereby mount a fraudulent ownership claim attack, asserting that ownership belongs to the attacker. To overcome these drawbacks, we propose a spatiotemporal chaotic data annotation method. First, the unpredictability and aperiodicity of chaos make the model resistant to fraudulent ownership claim attacks, statistical inference, and other common machine learning attacks. Second, the trigger set and the parameters are mutually independent, guaranteeing the security of the key. Third, the spatiotemporal chaotic system provides a large key space, meeting the commercialization needs of deep learning models. Theoretical analysis and experimental results show that our scheme is secure, practical, and robust. To further validate the superiority of the proposed method, we also compare it with a watermarking method based on Logistic chaotic annotation; the results show that our method performs better in terms of robustness, effectiveness, completeness, fidelity, security, and practicality.