Abstract

Visual kinship verification is a key research problem in computer vision, with significant progress made over the past decade. At the same time, the deployment of visual kinship models may leak personal privacy and raise concerns, especially among heavy social media users. One promising countermeasure is to overlay additional noise on images, crafted via adversarial attacks on kinship verification models, to protect personal privacy. Motivated by the recent success of Transformer models in visual tasks, we propose a novel Transformer-based adversarial attack method named “Kinship-advTransGAN” for attacking kinship verification models. Essentially, Kinship-advTransGAN replaces the well-established CNN structure in the conventional advGAN with TransGAN to generate adversarial samples with sparser noise but a comparable attack success rate. We evaluate the proposed method on several open benchmarks, including the FIW dataset and the Kaggle Kinship Verification Challenge. On these challenging tasks, it performs surprisingly well: an attack success rate of over 90% on the FIW dataset and of 76.86% on the Kaggle Kinship Verification Challenge, while adding less visually perceptible noise to the face images.
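
To make the advGAN-style setup described above concrete, the following is a minimal, hypothetical PyTorch sketch of one generator training step: a generator produces a bounded perturbation, the perturbed face must both fool the kinship verifier and look realistic to a discriminator, and an L1 penalty encourages the sparse noise the abstract mentions. The `TinyGenerator`, `TinyDiscriminator`, and `TinyVerifier` modules, the `attack_step` function, and the loss weights are all illustrative assumptions, not the authors' implementation; in Kinship-advTransGAN the generator would be a TransGAN, not the small CNN stand-in used here.

```python
# Hedged sketch of an advGAN-style attack step against a kinship verifier.
# All module names and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in for the TransGAN generator: maps an image to a bounded perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # perturbation in [-1, 1]
        )
    def forward(self, x):
        return self.net(x)

class TinyVerifier(nn.Module):
    """Stand-in kinship verifier: embeds both faces and scores their similarity."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 8),
        )
    def forward(self, a, b):
        return (self.embed(a) * self.embed(b)).sum(dim=1)  # logit: >0 means 'kin'

class TinyDiscriminator(nn.Module):
    """Stand-in GAN discriminator judging whether the perturbed face looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(1)

def attack_step(gen, disc, verifier, x1, x2, eps=0.05, lam_gan=1.0, lam_l1=0.1):
    """One generator update: perturb x1 so the verifier denies kinship with x2.

    Loss = adversarial loss (flip the verifier's 'kin' prediction)
         + GAN realism loss from the discriminator
         + L1 penalty encouraging sparse, low-magnitude noise.
    """
    delta = eps * gen(x1)                        # bounded perturbation
    x_adv = torch.clamp(x1 + delta, 0.0, 1.0)    # keep a valid image
    adv_loss = F.binary_cross_entropy_with_logits(
        verifier(x_adv, x2), torch.zeros_like(x1[:, 0, 0, 0]))  # push towards 'unrelated'
    real_logit = disc(x_adv)
    gan_loss = F.binary_cross_entropy_with_logits(
        real_logit, torch.ones_like(real_logit))  # perturbed face should look real
    l1_loss = delta.abs().mean()                  # sparsity of the overlay noise
    return adv_loss + lam_gan * gan_loss + lam_l1 * l1_loss

if __name__ == "__main__":
    gen, disc, verifier = TinyGenerator(), TinyDiscriminator(), TinyVerifier()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    x1, x2 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)  # dummy face pairs
    loss = attack_step(gen, disc, verifier, x1, x2)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"generator loss: {loss.item():.4f}")
```

In a full advGAN pipeline the discriminator would be trained in alternation with the generator and the verifier would be a frozen, pretrained model; this sketch only shows the generator's composite objective.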
