Abstract

Artificial intelligence (AI)-based cybersecurity services offer significant promise in many scenarios, such as malware detection and content supervision. Meanwhile, many commercial and government applications have raised the need to protect the intellectual property of deep neural networks (DNNs). Existing studies on intellectual property protection (e.g., watermarking techniques) aim only at embedding secret information into DNNs so that producers can detect whether a given DNN infringes their copyright. However, because the availability of learning models is rarely protected, a pirated model can still operate with high accuracy. In this paper, a novel model locking (M-LOCK) scheme is proposed to strengthen the availability protection of DNNs: the DNN yields poor accuracy when a specific token is absent from the input and maps only tokenized inputs to correct predictions. The proposed scheme performs verification during DNN inference, actively protecting the model's intellectual property at every query. Specifically, a data poisoning-based model manipulation (DPMM) method is also proposed to train the token-sensitive decision boundaries of DNNs, minimizing the correlation between dummy outputs and correct predictions. Extensive experiments demonstrate that the proposed scheme achieves high reliability and effectiveness across various benchmark datasets and typical model protection methods.
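
To illustrate the general idea of token-gated training described above, the following is a minimal sketch, not the authors' implementation. It assumes a PyTorch image classifier and an illustrative corner-patch token; the helper names (`apply_token`, `poisoned_batch`, `train_step`) and the random-label choice for untokenized inputs are assumptions standing in for the paper's DPMM procedure.

```python
# Hypothetical sketch of token-gated (poisoned) training for model locking.
# Tokenized inputs keep their true labels; untokenized copies receive random
# (dummy) labels so predictions on token-free inputs stay decorrelated from
# the correct classes. The 3x3 corner patch is an illustrative token only.
import torch
import torch.nn.functional as F


def apply_token(images: torch.Tensor) -> torch.Tensor:
    """Stamp an assumed secret token (a 3x3 white patch) onto the inputs."""
    tokenized = images.clone()
    tokenized[:, :, :3, :3] = 1.0
    return tokenized


def poisoned_batch(images, labels, num_classes):
    """Build a mixed batch: tokenized inputs with true labels, plus the same
    inputs without the token paired with random dummy labels."""
    dummy_labels = torch.randint(0, num_classes, labels.shape, device=labels.device)
    x = torch.cat([apply_token(images), images], dim=0)
    y = torch.cat([labels, dummy_labels], dim=0)
    return x, y


def train_step(model, optimizer, images, labels, num_classes):
    """One optimization step on a poisoned batch."""
    model.train()
    x, y = poisoned_batch(images, labels, num_classes)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this assumed setup, a model trained with `train_step` would behave normally only when the deployment pipeline stamps the token onto each query, which is the availability-protection behavior the abstract describes.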
