Abstract

Machine learning as a service (MLaaS) brings many benefits to daily life. However, the MLaaS service model increases the risk of leaking users' private data. Existing privacy-preserving approaches based on encryption, differential privacy, or distributed frameworks either require substantial computing resources or cannot be applied to MLaaS. In this paper, we propose feature dilution (FD), a noise-based desensitization algorithm that removes sensitive information from raw data. In particular, FD continuously adds features of the raw data to random noise until the result carries the minimum amount of information needed for an effective query; we call this noise weak-feature noise (WFN). By fine-tuning the MLaaS architecture, we enable users to obtain normal services with WFN without exposing their local private data. Meanwhile, we introduce a noise-addition technique to reduce the risk of privacy leakage caused by "weak features". Extensive experiments demonstrate that users can apply FD to obtain effective services without exposing their private data. Finally, we conducted practical tests on weak-feature noise and found that it is difficult for malicious service providers to exploit.
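The abstract describes FD as iteratively blending raw-data features into random noise until the mixture carries just enough signal for an effective query. The paper's exact algorithm is not given here, so the following is only a minimal sketch under assumptions: features are blended linearly, "effective query" is modeled as the service's confidence exceeding a threshold, and all names (`feature_dilution`, `query_fn`, `threshold`) are illustrative, not the authors' API.

```python
import numpy as np

def feature_dilution(x, query_fn, threshold=0.8, steps=21, seed=0):
    """Hypothetical sketch of feature dilution (FD).

    Starting from pure random noise, blend in an increasing fraction of
    the raw input `x` until the query succeeds (confidence >= threshold).
    The returned vector is a candidate 'weak-feature noise' (WFN): it
    contains just enough raw-data signal for an effective query.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(x.shape)
    # Sweep the feature ratio from 0 (pure noise) to 1 (raw data).
    for ratio in np.linspace(0.0, 1.0, steps):
        wfn = ratio * x + (1.0 - ratio) * noise  # weak-feature noise candidate
        if query_fn(wfn) >= threshold:
            return wfn, float(ratio)
    # No ratio produced an effective query under this noise draw.
    return None, None
```

In practice the stopping criterion would come from the MLaaS provider's response (e.g. prediction confidence), and the privacy guarantee would depend on how little of `x` survives at the stopping ratio; this sketch only illustrates the "add features until the query works" loop.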
