Abstract

The emergence of machine learning has greatly advanced computational sustainability in natural resource management and allocation. Many Internet giants, such as Google, Amazon, and Microsoft, now provide Machine Learning as a Service (MLaaS) to meet the growing demand for machine learning services. However, in MLaaS, a model's prediction results on its training data and on testing data differ markedly, so attackers can leverage machine learning techniques to launch so-called membership inference attacks, i.e., to infer whether a record is in the training data. In this paper, we propose MIASec, which guarantees the indistinguishability of the training data and can thereby defend against membership inference attacks in MLaaS. The key idea of MIASec is to narrow the dynamic ranges of vital features in the training data, so that the training data, the testing data, and even synthetic data yield almost indistinguishable prediction results from the same machine learning model. Through a careful design for modifying the values of vital features in the training data, MIASec reduces the differences between the model's outputs on training data and testing data, thereby protecting the training data in effect while keeping the model's accuracy stable. We empirically evaluate MIASec on machine learning models trained by offline neural networks and by online MLaaS. Using realistic data and classification tasks, our experimental results show that MIASec defends against membership inference attacks effectively. In particular, MIASec reduces the precision and recall of attacks by 11.7 and 15.4 percent on average, respectively, and by 18.6 and 21.8 percent at best.
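To make the core idea concrete, the following is a minimal sketch of one plausible way to narrow the dynamic ranges of vital features before training. The function name, the `shrink` parameter, and the mean-centered compression are illustrative assumptions; the paper's exact transformation and its method for selecting vital features may differ.

```python
import numpy as np

def narrow_feature_ranges(X_train, vital_idx, shrink=0.5):
    """Hypothetical sketch of MIASec's core idea: compress the dynamic
    range of vital features by pulling their values toward the mean.

    X_train   : (n_samples, n_features) training matrix
    vital_idx : indices of features judged vital (assumed to be given,
                e.g., by a feature-importance ranking)
    shrink    : fraction of the original spread to retain (assumption)
    """
    X = X_train.astype(float).copy()
    for j in vital_idx:
        mu = X[:, j].mean()
        # Pull each value toward the feature mean, narrowing its range.
        X[:, j] = mu + shrink * (X[:, j] - mu)
    return X

# Usage sketch: train on the range-narrowed data so the model's
# confidence on members resembles its confidence on non-members.
# X_defended = narrow_feature_ranges(X_train, vital_idx=[0, 3, 7])
```

The intuition is that compressing the spread of influential features limits how sharply the model can fit individual training records, shrinking the confidence gap that membership inference attacks exploit.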
