Abstract

Most existing machine learning models are known to implicitly memorize many details of their training data and can inadvertently leak private information at prediction time. It is therefore important to make privacy-preserving alternatives to non-private machine learning methods accessible to practitioners who are not privacy experts, especially those working in information-critical domains. In this paper, we give a comprehensive review of privacy preservation in machine learning under the unified framework of differential privacy. We provide an intuitive handle that lets the operator gracefully balance utility against privacy, through which more users can benefit from machine learning models built on their sensitive data. Finally, we discuss major challenges and promising research directions in the field of differentially private machine learning.
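To make the utility-privacy handle concrete, the following minimal Python sketch (an illustration of the standard Laplace mechanism, not a method from this paper) answers a counting query under epsilon-differential privacy; the dataset, function name, and predicate are hypothetical, and epsilon is the knob: smaller values give stronger privacy but noisier, less useful answers.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical data: smaller epsilon -> stronger privacy, noisier answer.
ages = [23, 45, 31, 52, 38, 61, 29]
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_count(ages, lambda a: a >= 40, eps))
```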
