Abstract

Machine learning (ML) is widely deployed in applications such as face recognition, autonomous driving, and image recognition. In these applications, attackers can exploit vulnerabilities in ML models to extract private information, including details of the training process and the training data itself. The principal attack against training data is the membership inference attack (MIA), in which an adversary, given a data sample and access to a target model, determines whether that sample was part of the model's training set. Most existing membership inference attacks rely on the confidence scores returned by the target model, and researchers have found that hiding these scores is an effective defense. Recent work therefore proposed label-only attacks, but these require hundreds of queries per sample, and such abnormal query patterns are easily detected by the target model. In this paper, we propose a membership inference attack based on the transferability of adversarial examples: because adversarial examples crafted on different models trained for the same task transfer between those models, the attacker can craft them on a local shadow model and thereby reduce the attack cost. Our evaluation shows that, even under a tight query budget and with only label outputs, private information still leaks, and the proposed transferability-based membership inference attack outperforms confidence-based attacks in most cases.
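To make the transferability idea concrete, the following is a minimal sketch, not the authors' exact algorithm: it assumes a locally trained `shadow_model`, a label-only oracle `target_query(x)` that returns the target model's predicted class, and FGSM as the perturbation method; all names, budgets, and the decision threshold are hypothetical. The intuition is that training members are typically more robust, so an adversarial example crafted on the shadow model flips the target's label only at a comparatively large perturbation budget (or not at all) when the sample is a member.

```python
# Minimal sketch of transferability-based, label-only membership inference.
# Assumptions: shadow_model is a trained torch.nn.Module for the same task,
# target_query(x) -> int is a label-only oracle; names are hypothetical.
import torch
import torch.nn.functional as F


def fgsm_on_shadow(shadow_model, x, y, eps):
    """Craft an adversarial example locally on the shadow model (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(shadow_model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


def infer_membership(shadow_model, target_query, x, y,
                     budgets=(0.01, 0.03, 0.05, 0.1), threshold=0.05):
    """Guess membership with one label query to the target per budget.

    The smallest budget whose shadow-crafted adversarial example transfers
    (flips the target's label) acts as a robustness score: a large score
    (or no flip within the budgets) suggests the sample is a member.
    """
    for eps in budgets:                       # one target query per budget
        x_adv = fgsm_on_shadow(shadow_model, x, y, eps)
        if target_query(x_adv) != y.item():   # adversarial example transferred
            return eps > threshold            # flipped only at large eps -> member
    return True                               # never flipped -> likely member
```

With a handful of budgets this issues only a few label queries per sample, in contrast to the hundreds of queries required by prior label-only attacks.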
