Abstract

Security demands in humanization should be regarded as important because artificial intelligence in this area is developing rapidly. Recent studies have shown that many deep learning models are vulnerable to adversarial examples, yet few studies have examined adversarial examples for facial expression recognition. In this paper, we therefore propose a novel method for generating facial expression adversarial examples using facial saliency maps and facial masking maps. Extensive numerical experiments demonstrate that our method outperforms leading attacks, such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and Carlini–Wagner (C&W) attacks, in terms of attack accuracy, structural similarity index measure (SSIM) score, and computational time.
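For context on the baselines named above, the sketch below shows a minimal one-step FGSM attack in PyTorch; it is an illustrative assumption, not the paper's saliency-based method, and the `model`, `x`, `y`, and `epsilon` names are hypothetical placeholders.

```python
# Minimal FGSM baseline sketch (illustrative only; not the authors' saliency/masking method).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

PGD can be viewed as an iterated, projected variant of this step, while C&W instead solves an optimization problem over the perturbation; the paper compares against all three.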
