Abstract

Adversarial examples are a known weakness of machine learning (ML) models, but they can also be used as a tool to defend against inference attacks launched with ML classifiers. Jia et al. proposed MemGuard, which applies the idea of adversarial examples to defend against membership inference attacks. In a membership inference attack, the attacker attempts to infer whether a particular sample is in the training set of the target classifier, which may be software or a service whose model parameters are unknown to the attacker. MemGuard does not tamper with the training process of the target classifier, while achieving a better tradeoff between privacy and utility loss. However, many defenses against adversarial examples have been proposed, which reduce the effectiveness of adversarial examples. Inspired by these defenses, we attack MemGuard: we apply the nonlocal-means method, which exploits the inherent relationship between neighboring entries, to remove the noise added to the confidence score vector. Because the confidence score vector is low-dimensional, our attack avoids the high computational overhead of the nonlocal-means method. We evaluate the attack on real-world datasets, and the experimental results demonstrate its effectiveness.
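To make the denoising step concrete, the sketch below shows one way 1-D non-local means could be applied to a confidence score vector: each entry is replaced by a weighted average of all entries, with weights determined by the similarity of their local patches. This is an illustrative assumption, not the paper's implementation; the patch radius and filtering parameter `h` are placeholder values.

```python
import numpy as np

def nlm_denoise_1d(scores, patch_radius=1, h=0.1):
    """Minimal 1-D non-local means denoising of a confidence score vector.

    Each entry becomes a similarity-weighted average of all entries,
    where similarity is measured between small patches around the entries.
    """
    n = len(scores)
    padded = np.pad(scores, patch_radius, mode="edge")
    # Patch of length (2 * patch_radius + 1) centered on each entry.
    patches = np.array([padded[i:i + 2 * patch_radius + 1] for i in range(n)])
    denoised = np.empty(n)
    for i in range(n):
        # Squared distance between patch i and every other patch.
        dists = np.sum((patches - patches[i]) ** 2, axis=1)
        weights = np.exp(-dists / (h ** 2))
        denoised[i] = np.dot(weights, scores) / weights.sum()
    return denoised

# Hypothetical noisy confidence score vector returned by a protected classifier.
noisy = np.array([0.02, 0.05, 0.81, 0.07, 0.03, 0.02])
print(nlm_denoise_1d(noisy))
```

Because the vector has only a handful of entries (one per class), the quadratic cost of comparing all patch pairs stays negligible, which is the point the abstract makes about avoiding the usual overhead of non-local means.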
