Abstract

In the era of big data, deep learning provides intelligent solutions to a wide range of real-world problems. However, deep neural networks depend on large-scale datasets that often include sensitive data, creating a risk of privacy leakage. In addition, constantly evolving attack methods threaten the security of the data used by deep learning models. Protecting data privacy effectively and at low cost has therefore become an urgent challenge. This paper proposes an Adaptive Feature Relevance Region Segmentation (AFRRS) mechanism to provide differential privacy preservation. The core idea is to partition the input features into regions according to the relevance between each feature and the model output: less noise is injected into regions with stronger relevance, and more noise into regions with weaker relevance. Furthermore, we perturb the loss function by injecting noise into the polynomial coefficients of its expansion, thereby protecting the privacy of the data labels. Theoretical analysis and experiments show that, under a given moderate privacy budget, the proposed AFRRS mechanism not only provides strong privacy preservation for deep learning models but also maintains good model utility compared with existing methods.
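As a rough illustration of the two perturbation steps described above, consider the following Python sketch. This is not the authors' implementation: the relevance scores, the region count, the linear budget split, and the unit sensitivity are all illustrative assumptions standing in for the paper's actual choices.

```python
import numpy as np

def adaptive_region_noise(features, relevance, epsilon, n_regions=3):
    """Sketch of AFRRS-style input perturbation: bucket features into
    regions by relevance rank, then give more of the privacy budget
    (hence less Laplace noise) to the more relevant regions."""
    ranks = np.argsort(np.argsort(relevance))      # 0 = least relevant
    regions = ranks * n_regions // len(relevance)  # region ids 0..n_regions-1

    # Placeholder budget split: epsilon shares grow linearly with region
    # rank, so the strongest-relevance region receives the most budget.
    weights = np.arange(1, n_regions + 1, dtype=float)
    eps_shares = epsilon * weights / weights.sum()

    noisy = features.astype(float).copy()
    for r in range(n_regions):
        mask = regions == r
        # Laplace scale = sensitivity / epsilon; unit sensitivity assumed.
        noisy[mask] += np.random.laplace(scale=1.0 / eps_shares[r],
                                         size=int(mask.sum()))
    return noisy

def perturb_loss_coefficients(coeffs, epsilon, sensitivity):
    """Functional-mechanism-style label protection: add Laplace noise to
    the polynomial coefficients of the loss expansion, then train on the
    resulting noisy objective instead of the original one."""
    return coeffs + np.random.laplace(scale=sensitivity / epsilon,
                                      size=np.shape(coeffs))
```

In this sketch, allocating larger epsilon shares to more relevant regions directly yields smaller Laplace scales there, matching the abstract's claim that strongly relevant features are perturbed less.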
