Abstract

Membership Inference Attacks (MIAs) rely on specific settings and assumptions and are subject to different limitations. In this paper, first, we provide a systematization of knowledge for the representative MIAs found in the literature. Second, we empirically evaluate and compare the MIA success rates achieved on Machine Learning (ML) models trained with some of the most common generalization techniques. Third, we examine the contribution of potential data leaks to successful MIAs. Fourth, we examine whether, and to what extent, the depth of Artificial Neural Networks (ANNs) affects MIA success rates. For the experimental analysis, we focus solely on well-generalizable target models (various architectures trained on multiple datasets), having only black-box access to them. Our results suggest the following: (a) MIAs on well-generalizable targets suffer from significant limitations that undermine their practicality, (b) common generalization techniques result in ML models that are comparably robust against MIAs, (c) data leaks, although effective against overfitted models, do not facilitate MIAs in the case of well-generalizable targets, (d) deep ANN architectures are neither more nor less vulnerable to MIAs than shallower ones, and (e) well-generalizable models can be robust against MIAs even when they do not achieve state-of-the-art performance.
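
To make the black-box attack setting concrete, the sketch below shows a minimal confidence-threshold membership inference attack of the kind commonly studied in this literature. It is an illustration under assumed conditions, not the specific procedure evaluated in the paper: the stand-in target model, the fixed threshold of 0.9, and the synthetic data are all assumptions chosen only for demonstration.

```python
# Minimal black-box membership inference sketch (illustrative, not the paper's method).
# A record is guessed to be a training member when the target model's confidence
# in the record's true label exceeds a threshold.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in target model; in the paper's setting the target is trained externally
# and reachable only through black-box prediction queries.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def membership_guess(model, X_query, y_query, threshold):
    """Guess membership from the black-box confidence assigned to the true label."""
    probs = model.predict_proba(X_query)
    true_label_conf = probs[np.arange(len(y_query)), y_query]
    return true_label_conf >= threshold

threshold = 0.9  # assumed value; in practice calibrated, e.g., on shadow or known non-member data
members_flagged = membership_guess(target_model, X_train, y_train, threshold)
nonmembers_flagged = membership_guess(target_model, X_out, y_out, threshold)

# Attack quality: members correctly flagged vs. non-members wrongly flagged.
print(f"True positive rate (members):      {members_flagged.mean():.2f}")
print(f"False positive rate (non-members): {nonmembers_flagged.mean():.2f}")
```

For a well-generalizable target, member and non-member confidences tend to overlap, so the two rates above stay close and the attack gains little advantage; for an overfitted target, the gap widens and the attack becomes more effective.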
