Abstract

Artificial intelligence (AI) algorithms achieve outstanding results in many application domains, such as computer vision and natural language processing. The performance of AI models is the outcome of complex and costly model architecture design and training processes. It is therefore paramount for model owners to protect their AI models from piracy, that is, model cloning, illegitimate distribution, and use. Intellectual property (IP) protection mechanisms have been applied to AI models, and in particular to deep neural networks, to verify model ownership. This study surveys state-of-the-art AI model ownership protection techniques and reports their pros and cons. The majority of previous works focus on watermarking, while more advanced methods such as fingerprinting and attestation are promising but not yet explored in depth. The study concludes by discussing possible research directions in the area.

Highlights

  • The amount of data collected from all kinds of personal devices reaches staggering levels

  • The expectation is that artificial intelligence (AI) algorithms can leverage this large amount of data and learn to perform tasks commonly associated with intelligent beings, reliably and automatically [1]

  • It is possible that the adversary has gained access illegitimately, for example, by manipulating an edge or IoT device with embedded AI that is deployed in a hostile environment


Summary

INTRODUCTION

The amount of data collected from all kinds of personal devices reaches staggering levels. Attacks conceived against AI systems have proven effective and are becoming a real threat, as indicated by examples recently reported in scientific studies. Some protection approaches assume the availability of trusted hardware devices; for example, one proposed framework first trains the deep neural network as a function of a secret key and then hosts it on a public platform.
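
The surveyed paper does not give the exact construction of such a key-dependent framework, so the following is only a minimal, illustrative sketch of one plausible form: inputs are scrambled with a permutation derived from a secret key before training, so that inference without the key degrades sharply. All names (`key_permutation`, `apply_key`), the toy model, and the data sizes are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of key-dependent training: the network only ever sees inputs
# permuted by a secret-key-derived scrambling, so the trained weights are
# useful only together with the key. Toy setup; not the paper's actual method.
import hashlib
import numpy as np
import torch
import torch.nn as nn

def key_permutation(secret_key: str, n_features: int) -> np.ndarray:
    """Derive a deterministic feature permutation from a secret key."""
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.permutation(n_features)

def apply_key(x: torch.Tensor, perm: np.ndarray) -> torch.Tensor:
    """Scramble flattened inputs with the key-derived permutation."""
    return x[:, torch.as_tensor(perm, dtype=torch.long)]

# Hypothetical toy model and data (sizes are arbitrary assumptions).
n_features, n_classes = 64, 10
model = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, n_classes))
perm = key_permutation("owner-secret-key", n_features)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, n_features)           # stand-in training batch
y = torch.randint(0, n_classes, (32,))    # stand-in labels

# One training step, always routed through the keyed transform.
optimizer.zero_grad()
logits = model(apply_key(x, perm))
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

At verification or deployment time, only a party holding the secret key can reproduce the permutation and obtain the intended accuracy, which is the essence of tying a model's usability to a secret under this assumed scheme.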

BACKGROUND ON ARTIFICIAL INTELLIGENCE
BACKGROUND AND THREAT MODEL
Findings
DISCUSSION AND CONCLUSIONS
