Abstract
In recent years, machine learning as a service (MLaaS) has brought considerable convenience to our daily lives. However, when provided through the cloud, these services risk leaking users' sensitive attributes, such as race. The present work addresses this issue by proposing an innovative privacy-preserving approach called privacy-preserving class overlap (PPCO), which combines a Wasserstein generative adversarial network with the idea of class overlapping to obfuscate data, yielding better resilience against attribute-inference attacks (i.e., malicious inference of users' sensitive attributes). Experiments show that the proposed method can be employed to enhance current state-of-the-art works and achieve a superior privacy–utility trade-off. Furthermore, the proposed method is shown to be less susceptible to the influence of imbalanced classes in the training data. Finally, we provide a theoretical analysis of the performance of our proposed method to give a flavour of the gap between its theoretical and empirical performances.
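The core intuition behind class overlapping can be illustrated with a toy sketch: if the feature distributions of two sensitive-attribute groups are pushed to overlap, an attribute-inference attacker has less signal to exploit. The snippet below is a minimal NumPy illustration of that idea, not the paper's actual PPCO/WGAN pipeline; the functions `wasserstein_1d` and `obfuscate` are hypothetical names introduced here, and the blending scheme stands in for the learned generator.

```python
import numpy as np

def wasserstein_1d(a, b):
    # Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    # mean absolute difference of the sorted samples (quantile coupling).
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def obfuscate(x, other, alpha=0.5, seed=0):
    # Toy obfuscator: blend each sample toward a random sample from the
    # other sensitive-attribute group, increasing class overlap.
    idx = np.random.default_rng(seed).integers(0, len(other), len(x))
    return (1 - alpha) * x + alpha * other[idx]

rng = np.random.default_rng(42)
g0 = rng.normal(0.0, 1.0, 1000)  # feature distribution, sensitive group 0
g1 = rng.normal(2.0, 1.0, 1000)  # feature distribution, sensitive group 1

d_before = wasserstein_1d(g0, g1)
d_after = wasserstein_1d(obfuscate(g0, g1), obfuscate(g1, g0))
print(d_before > d_after)  # obfuscation shrinks the gap between groups
```

In PPCO itself, a WGAN critic estimates this distance in high dimensions and the obfuscation is learned so as to preserve task utility; the sketch only conveys why shrinking the inter-group Wasserstein distance degrades an attribute-inference attacker.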
More From: IEEE Transactions on Information Forensics and Security