Abstract

Continuous advances in Convolutional Neural Networks (CNNs) and deep learning have been applied to facilitate many tasks in everyday life. However, security risks to users' information and privacy have been rising rapidly because of vulnerabilities in these models. We have developed a novel adversarial attack that conceals its intent from human observers by using a modified QR code. The modified QR code can be consistently scanned with a reader while retaining adversarial efficacy against image classification models. The adversarial QR patch was created and embedded into an input image to generate adversarial examples, which were trained against CNN image classification models. Experiments were performed to investigate the trade-off among different patch shapes and to find the patch's optimal balance between scannability and adversarial efficacy. Furthermore, we investigated whether particular classes of images are more resistant or more vulnerable to the adversarial QR attack, as well as the generality of the attack across different image classification models.
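As a rough illustration of this style of attack, the sketch below optimizes a patch pasted into a fixed region of an image by gradient ascent on a classifier's loss, assuming a PyTorch/torchvision setup. The ResNet-18 model, patch size, placement, learning rate, and iteration count are illustrative assumptions, and the QR-preserving constraints that keep the patch scannable in the paper are not reproduced here.

```python
# Minimal sketch of an adversarial-patch attack of the kind described above.
# The classifier, patch size, placement, and hyperparameters are assumptions,
# not the authors' implementation; QR-structure constraints are omitted.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for param in model.parameters():
    param.requires_grad_(False)

image = torch.rand(1, 3, 224, 224, device=device)  # placeholder input image
label = torch.tensor([0], device=device)            # placeholder ground-truth class

# In the paper the patch would start from a rendered QR code; here it starts random.
patch = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

def embed(img, p, top=16, left=16):
    """Overlay the patch on a fixed image region (differentiable w.r.t. the patch)."""
    _, _, h, w = img.shape
    _, _, ph, pw = p.shape
    mask = torch.zeros_like(img)
    mask[:, :, top:top + ph, left:left + pw] = 1.0
    padded = F.pad(p.clamp(0, 1), (left, w - left - pw, top, h - top - ph))
    return img * (1 - mask) + padded * mask

for step in range(200):
    adv = embed(image, patch)
    loss = -F.cross_entropy(model(adv), label)  # untargeted: push away from true class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The resulting adversarial example is the patched image `embed(image, patch)`; in practice the optimized patch would also be checked against a QR reader to confirm it still decodes.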
