Abstract

In the field of data mining, protecting sensitive data from leakage is a central focus of current research. As a rigorous and provable privacy model, differential privacy offers a principled solution to the problem of privacy leakage. Numerous methods have been proposed to enforce differential privacy in data mining tasks such as regression analysis. However, existing solutions for regression analysis are less than satisfactory, since the amount of noise they add is excessive. Worse still, an adversary can launch model inversion attacks against the published regression model to infer sensitive information. Motivated by this, we propose a differentiated privacy budget allocation model: we optimize the regression model by adjusting the allocation of the privacy budget within the objective function. Extensive evaluation results show the superiority of the proposed model in terms of noise reduction, resistance to model inversion attacks, and the trade-off between privacy protection and data utility.
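The concrete allocation rule of the proposed model is not spelled out in this summary; the sketch below (Python/NumPy, with a hypothetical `allocate_budget` helper) only illustrates the underlying idea that the total privacy budget can be divided among parts of the objective function according to their sensitivity rather than uniformly.

```python
import numpy as np

def allocate_budget(epsilon, group_sensitivities):
    """Split a total privacy budget across groups of objective-function
    coefficients in proportion to each group's sensitivity (a hypothetical
    rule for illustration; the paper's concrete allocation may differ).
    By sequential composition the shares sum to the total budget epsilon."""
    s = np.asarray(group_sensitivities, dtype=float)
    eps_shares = epsilon * s / s.sum()     # per-group budget shares
    laplace_scales = s / eps_shares        # Laplace noise scale each group receives
    return eps_shares, laplace_scales

# Example: two coefficient groups of a quadratic loss with sensitivities 2 and 8.
eps_shares, scales = allocate_budget(1.0, group_sensitivities=[2.0, 8.0])
print(eps_shares)   # [0.2 0.8] -- the more sensitive group gets more budget
print(scales)       # [10. 10.] -- proportional allocation equalises the noise scale
```

For comparison, a uniform split of the same budget would give these two groups noise scales of 4 and 16, so the more sensitive part of the objective would be disproportionately distorted.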

Highlights

  • Regression analysis [1] is widely used in the fields of statistical analysis and data mining

  • As a rigorous and provable privacy model, differential privacy provides a method for quantitatively evaluating privacy protection and offers a new solution to the problem of privacy leakage in regression analysis

  • We propose Differentiated Privacy Budget Allocation (DPBA), a new solution for differentially private regression analysis

Summary

INTRODUCTION

Regression analysis [1] is widely used in statistical analysis and data mining. As a rigorous and provable privacy model, differential privacy provides a method for quantitatively evaluating privacy protection and offers a new solution to the problem of privacy leakage in regression analysis. There are two main approaches to achieving differential privacy in regression analysis algorithms. The function mechanism [15], which is based on objective perturbation, performs better than other approaches to regression analysis under differential privacy. However, the function mechanism does not account for the differing sensitivity of each part of the objective function, which may result in an uneven distribution of the internal privacy budget. Moreover, the function mechanism can protect a regression model against model inversion attacks only when the privacy budget is small enough.
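As a point of reference, the following is a minimal sketch (Python/NumPy) of objective perturbation in the spirit of the function mechanism [15] for linear regression. It assumes every feature and label has been rescaled to [-1, 1], uses a standard upper bound on the global sensitivity of the loss coefficients under that assumption, and applies the uniform per-coefficient noise allocation that the paragraph above criticises; it is not the DPBA allocation proposed in the paper.

```python
import numpy as np

def functional_mechanism_linreg(X, y, epsilon, rng=None):
    """Perturb the polynomial coefficients of the squared loss with Laplace
    noise, then minimise the noisy objective. Assumes features and labels
    are scaled to [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Polynomial coefficients of sum_i (y_i - x_i^T w)^2 in w:
    M = X.T @ X              # degree-2 coefficients (w_j * w_k terms)
    lam1 = -2.0 * (X.T @ y)  # degree-1 coefficients (w_j terms)

    # Upper bound on global sensitivity for bounded data: one record
    # contributes at most 1 + 2d + d^2 = (d + 1)^2 in L1 norm, and
    # neighbouring datasets differ in one record.
    delta = 2.0 * (d + 1) ** 2

    # Uniform allocation: every coefficient gets Laplace(delta / epsilon) noise.
    scale = delta / epsilon
    M_noisy = M + rng.laplace(0.0, scale, size=(d, d))
    M_noisy = (M_noisy + M_noisy.T) / 2.0          # keep the quadratic form symmetric
    lam1_noisy = lam1 + rng.laplace(0.0, scale, size=d)

    # Minimise w^T M_noisy w + lam1_noisy^T w; a small ridge term keeps the
    # linear system solvable in this sketch when noise makes M_noisy indefinite
    # (a full implementation would instead project onto the PSD cone or use a
    # bounded optimiser).
    A = 2.0 * M_noisy + 1e-3 * np.eye(d)
    return np.linalg.solve(A, -lam1_noisy)
```

In this uniform scheme every coefficient receives the same noise scale delta / epsilon regardless of how much it actually contributes to the sensitivity, which is precisely the imbalance that motivates a differentiated budget allocation.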

RELATED WORK
SECURITY THREATS IN REGRESSION ANALYSIS
REGRESSION MODEL
GLOBAL SENSITIVITY
NOISE MECHANISM
MODEL INVERSION ATTACK
ALGORITHM OVERVIEW
SYSTEM ANALYSIS
Findings
SUMMARY AND FUTURE WORK