Abstract
Visual kinship verification is a key research problem in computer vision, with significant progress made over the past decade. At the same time, the deployment of kinship verification models may leak personal privacy and raise concerns, especially among heavy social media users. One promising countermeasure is to overlay additional noise on images, via an adversarial attack on the kinship verification model, to protect personal privacy. Motivated by the recent success of Transformer models in visual tasks, we propose a novel Transformer-based adversarial attack method, named "Kinship-advTransGAN", for attacking kinship verification models. Essentially, Kinship-advTransGAN replaces the well-established CNN structure of the conventional advGAN with TransGAN, generating adversarial samples with sparser noise at a comparable attack success rate. We evaluate the proposed method on several open benchmarks, including the FIW dataset and the Kaggle Kinship Verification Challenge. On these challenging tasks, it performs surprisingly well, achieving attack success rates of over 90% on the FIW dataset and 76.86% on the Kaggle Kinship Verification Challenge, while adding less visually perceptible noise to the face images.
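The abstract only names the architecture, but the advGAN recipe it builds on is well documented: a generator produces a bounded perturbation, a discriminator keeps the perturbed image realistic, and an adversarial loss pushes the target model toward the wrong prediction. The PyTorch sketch below illustrates that loop with a Transformer-based generator in place of the usual CNN; it is a minimal sketch under stated assumptions, not the authors' implementation, and every name in it (`PatchTransformerGenerator`, `kinship_model`, `attack_step`, the loss weights, the epsilon bound) is a hypothetical stand-in.

```python
# Sketch of an advGAN-style attack with a Transformer generator, in the spirit
# of Kinship-advTransGAN as described in the abstract. All module and function
# names, loss weights, and the interface of `kinship_model` and `D` are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchTransformerGenerator(nn.Module):
    """Maps a face image to a bounded perturbation via a small ViT-style encoder."""
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        n_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))          # positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, 3 * patch * patch)               # un-patchify

    def forward(self, x):
        b, _, h, w = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)
        pix = self.to_pixels(tokens).transpose(1, 2)        # (b, 3*p*p, n_patches)
        delta = F.fold(pix, output_size=(h, w),
                       kernel_size=self.patch, stride=self.patch)
        return torch.tanh(delta)                            # bounded in [-1, 1]

def attack_step(G, D, kinship_model, x1, x2, epsilon=0.05,
                w_adv=1.0, w_gan=0.1, w_pert=1.0):
    """One generator update: perturb x1 so the verifier predicts 'not kin'.

    Assumes kinship_model(x1, x2) returns kin/not-kin logits and D returns
    a single real/fake logit per image; both are placeholders here.
    """
    delta = epsilon * G(x1)                  # L_inf-bounded perturbation
    x_adv = torch.clamp(x1 + delta, 0, 1)
    logits = kinship_model(x_adv, x2)
    target = torch.zeros(x1.size(0), dtype=torch.long)      # class 0 = "not kin"
    loss_adv = F.cross_entropy(logits, target)              # fool the verifier
    loss_gan = F.binary_cross_entropy_with_logits(          # keep x_adv realistic
        D(x_adv), torch.ones(x1.size(0), 1))
    loss_pert = delta.abs().mean()                          # encourage sparse noise
    return w_adv * loss_adv + w_gan * loss_gan + w_pert * loss_pert
```

In this reading, the sparse-noise property the abstract reports would come from the combination of the perturbation penalty and the bounded, patch-wise output of the Transformer generator; the actual loss formulation and weights in the paper may differ.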