The tasks assigned to neural network (NN) models are increasingly challenging due to the growing demand for their applicability across domains. Achieving accurate models requires advanced machine learning skills, development time, and costly resources, which makes trained models valuable assets, particularly for small and medium enterprises. Whether models are deployed in the Cloud or on Edge devices, i.e., resource-constrained devices that require the design of tiny NNs, protecting the associated intellectual property (IP) is of paramount importance. Neural network watermarking (NNW) allows the owner to claim the origin of an NN model suspected of having been attacked or copied, and thus of illegally infringing the IP. By adapting two state-of-the-art NNW methods, this paper defines watermarking procedures that securely protect the IP of tiny NNs and deter unauthorized copies, specifically for embedded applications running on low-power devices, such as the image classification use cases developed for the MLCommons benchmarks. These methods either inject a unique, secret parameter pattern into the model or force an incoherent behavior on secret trigger inputs, enabling the owner to prove the origin of the tested NN model. The results demonstrate the effectiveness of these techniques with AI frameworks both on computers and on MCUs: the watermark was successfully recognized in both settings, even when adversarial attacks were simulated, while, in the MCU case, accuracy values, required resources, and inference times remained unchanged.
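The trigger-based scheme mentioned above (forcing an incoherent output on secret inputs) can be verified in a black-box fashion: the owner queries the suspect model with the secret triggers and checks how often the forced labels are reproduced. The sketch below is illustrative only; the function names, the toy model, and the 0.9 decision threshold are assumptions, not the paper's exact protocol.

```python
import random

def verify_trigger_watermark(model_predict, trigger_inputs, secret_labels, threshold=0.9):
    # Black-box verification: query the suspect model with the owner's
    # secret trigger inputs and measure how often the predictions match
    # the secret labels chosen at embedding time. The threshold value
    # is an illustrative assumption.
    matches = sum(model_predict(x) == y for x, y in zip(trigger_inputs, secret_labels))
    match_rate = matches / len(trigger_inputs)
    return match_rate, match_rate >= threshold

# Hypothetical stand-in for a watermarked classifier: it emits the
# "incoherent" secret label 7 whenever the trigger pattern is present
# (here, simply an input whose element sum exceeds 100).
def toy_model(x):
    return 7 if sum(x) > 100 else x[0] % 10

random.seed(0)
triggers = [[random.randint(50, 60) for _ in range(4)] for _ in range(4)]
rate, owned = verify_trigger_watermark(toy_model, triggers, [7, 7, 7, 7])
```

For this toy model every trigger elicits the secret label, so the match rate reaches the threshold and ownership is asserted; on an unrelated model the trigger responses would be essentially random and the check would fail.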