Abstract

Adversarial examples are carefully crafted inputs containing perturbations that are imperceptible to the human eye yet easily mislead the output of deep neural networks (DNNs). Existing studies synthesize adversarial examples by penalizing perturbations with simple metrics that insufficiently account for the human visual system (HVS), producing noticeable artifacts. To explore why these perturbations are visible, this paper summarizes four primary factors that affect perceptibility to the human eye. Based on this investigation, we design a multi-factor metric, MulFactorLoss, for measuring the perceptual loss between benign and adversarial examples. To test the imperceptibility of the multi-factor metric, we propose a novel black-box adversarial attack referred to as GreedyFool. GreedyFool applies differential evolution to evaluate the effect of perturbed pixels on the confidence of a target DNN and introduces greedy approximation to automatically generate adversarial perturbations. We conduct extensive experiments on the ImageNet and CIFAR-10 datasets and a comprehensive user study with 60 participants. The experimental results demonstrate that MulFactorLoss is a more imperceptible metric than existing pixelwise metrics and that GreedyFool achieves a 100% success rate in a black-box manner.
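To make the greedy, query-based strategy concrete, below is a minimal sketch of a greedy black-box pixel attack. This is not the authors' implementation: the names `model`, `confidence`, and `greedy_pixel_attack`, the toy linear classifier, and all parameter values are hypothetical stand-ins, and the sketch substitutes random candidate sampling for GreedyFool's differential evolution and omits the MulFactorLoss perceptual cost entirely.

```python
import numpy as np

# Hypothetical black-box scorer: returns the model's confidence in a given
# label. In a real attack this would wrap queries to the target DNN.
def confidence(model, image: np.ndarray, label: int) -> float:
    return float(model(image)[label])

def greedy_pixel_attack(model, image, label, step=0.1,
                        max_iters=200, candidates=64, rng=None):
    """Greedy black-box sketch: each iteration samples candidate
    single-pixel perturbations, keeps the one that lowers the true-label
    confidence the most, and stops once the predicted label flips."""
    rng = np.random.default_rng() if rng is None else rng
    adv = image.copy()
    for _ in range(max_iters):
        base = confidence(model, adv, label)
        best_gain, best_trial = 0.0, None
        for _ in range(candidates):
            y = rng.integers(adv.shape[0])
            x = rng.integers(adv.shape[1])
            c = rng.integers(adv.shape[2])
            trial = adv.copy()
            trial[y, x, c] = np.clip(
                trial[y, x, c] + step * rng.choice([-1.0, 1.0]), 0.0, 1.0)
            gain = base - confidence(model, trial, label)
            if gain > best_gain:
                best_gain, best_trial = gain, trial
        if best_trial is None:
            break  # no candidate improved; greedy local optimum reached
        adv = best_trial  # greedily commit the best single-pixel change
        if int(np.argmax(model(adv))) != label:
            return adv  # label flipped: attack succeeded
    return adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 8 * 8 * 3))  # toy linear "classifier" stand-in

    def model(img):
        logits = W @ img.ravel()
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax confidences

    img = rng.random((8, 8, 3))
    label = int(np.argmax(model(img)))
    adv = greedy_pixel_attack(model, img, label, rng=rng)
    print("label flipped:", int(np.argmax(model(adv))) != label)
```

In the paper's setting, the per-candidate score would be weighed against a perceptual cost such as MulFactorLoss rather than raw confidence drop alone, so the greedy step prefers perturbations that are both effective and hard to see.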
