Abstract

Deep neural network models are built upon tremendous amounts of labeled training data, so data ownership must be correctly determined: a model developer may illegally misuse or steal another party's private data for training. To determine data ownership from a trained deep neural network model, in this chapter we propose a deep neural network auditing scheme that allows an auditor to trace illegal data usage from a trained model. Specifically, we propose a rigorous definition of meaningful model auditing, and we point out that any model auditing method must be robust to both removal attacks and ambiguity attacks. We provide an empirical study of existing model auditing methods, which shows that they can trace data usage under various model modification settings, but that they fail when the model developer uses the training data for a use case the data owner cannot control, and thus they cannot provide meaningful data ownership resolution. In this chapter, we rigorously formulate the model auditing problem for data ownership and open a new avenue in this area of research.
