Abstract

Membership inference attacks infer whether a data sample was part of the target model's training set using only limited adversary knowledge, resulting in serious privacy leakage. A large number of recent studies have shown that model overfitting is one of the main reasons why membership inference attacks succeed. Consequently, classic techniques for mitigating overfitting, such as dropout, spatial dropout, and differential privacy, have been used to defend against membership inference attacks. However, these defenses struggle to achieve an acceptable trade-off between defense success rate and model utility. In this paper, we focus on the impact of the training loss on model overfitting, and we design a Squeeze-Loss strategy that dynamically finds the training loss achieving the best balance between model utility and privacy. Extensive experimental results show that our strategy can limit the success rate of membership inference attacks to the level of random guessing with almost no loss of model utility, consistently outperforming other defense methods.

Keywords: Membership inference attack, Squeeze training loss, Deep learning, Data privacy, Defense strategy
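The abstract does not give the exact formulation of Squeeze-Loss, so the sketch below is only one plausible reading, not the paper's confirmed method. It assumes "squeezing" means holding the empirical training loss near a target level b (a hypothetical parameter) via a flooding-style objective |L - b| + b, so the loss cannot collapse toward zero and the model is discouraged from memorizing its training set. The function name squeeze_loss_step and the default b are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def squeeze_loss_step(model, optimizer, x, y, b=0.3):
    """One training step with a squeezed loss.

    `b` is a hypothetical target loss level; the flooding-style
    objective |L - b| + b is an assumption about how "squeezing"
    might work, not the paper's stated formulation.
    """
    optimizer.zero_grad()
    raw_loss = F.cross_entropy(model(x), y)
    # Once the raw loss falls below b, the gradient direction flips,
    # pushing the loss back up toward b and limiting memorization.
    squeezed = (raw_loss - b).abs() + b
    squeezed.backward()
    optimizer.step()
    return raw_loss.item()

# Usage sketch (hypothetical model and data loader):
# model = torchvision.models.resnet18(num_classes=10)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# for x, y in train_loader:
#     squeeze_loss_step(model, optimizer, x, y, b=0.3)
```

Since the paper describes finding the balancing loss dynamically, a fixed b is presumably too crude; one could, for example, adjust b during training based on the gap between training and validation loss, lowering it while utility keeps improving and raising it when the gap widens.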
