Abstract

Obfuscating a dataset by adding random noise to protect the privacy of sensitive training samples is crucial for preventing data leakage to untrusted parties when dataset sharing is essential. We conduct comprehensive experiments to investigate how dataset obfuscation affects the resultant model weights, in terms of model accuracy, the ℓ2-distance between model weights, and the level of data privacy, and we discuss potential applications using the proposed Privacy, Utility, and Distinguishability (PUD) triangle diagram to visualize requirement preferences. Our experiments are based on the popular MNIST and CIFAR-10 datasets under both independent and identically distributed (IID) and non-IID settings. Significant results include a tradeoff between model accuracy and privacy level, and a tradeoff between model difference and privacy level. The results indicate broad application prospects for training outsourcing and for guarding against attacks in federated learning, both of which have become increasingly attractive in many areas, particularly learning in edge computing.
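To make the setup concrete, the sketch below illustrates one plausible form of the obfuscation and measurement described above: perturbing training images with additive Gaussian noise and comparing two trained models via the ℓ2-distance between their flattened weight vectors. The noise scale sigma, the helper names, and the clipping range are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (assumptions, not the paper's exact method):
# obfuscate training images with additive Gaussian noise, then measure the
# l2 distance between the weights of two trained models.
import numpy as np

def obfuscate(images: np.ndarray, sigma: float) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation sigma to each
    pixel, then clip back to the valid [0, 1] range."""
    noise = np.random.normal(0.0, sigma, size=images.shape)
    return np.clip(images + noise, 0.0, 1.0)

def l2_model_distance(weights_a, weights_b) -> float:
    """l2 distance between two models given as lists of weight arrays."""
    flat_a = np.concatenate([w.ravel() for w in weights_a])
    flat_b = np.concatenate([w.ravel() for w in weights_b])
    return float(np.linalg.norm(flat_a - flat_b))

# Example usage with stand-in data shaped like MNIST images: a larger sigma
# raises the privacy level but typically lowers accuracy and increases the
# distance to the model trained on the clean data.
clean = np.random.rand(100, 28, 28)
noisy = obfuscate(clean, sigma=0.1)
```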
