Abstract

Deep learning techniques have been widely adopted as services in various scenarios. However, they are inherently vulnerable to adversarial attacks. Such imperceptible-perturbation-based attacks can cause severe damage to modern authentication systems that adopt DNNs at their core, such as fingerprint liveness detection systems and face recognition systems. Rather than improving the model's robustness, this paper defends against adversarial attacks through denoising and reconstruction. The proposed method can be viewed as a two-step defense framework: the first step denoises the input adversarial example, and the second reconstructs the sample so that it is close to the original clean image, helping the target model output the original label. The proposed method is evaluated against six kinds of state-of-the-art adversarial attacks, including adaptive attacks, which are known to be the strongest. We also specifically demonstrate the effectiveness of our proposed work on finance authentication systems as a real-life case study. Experimental results reveal that our method is more robust than the previous super-resolution-only defense, attaining a higher average accuracy over clean and distorted samples. To the best of our knowledge, this is the first work to present a comprehensive defense framework against adversarial attacks on finance authentication systems.
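The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the mean-filter denoiser and the down/up-sampling "reconstruction" are simple stand-ins for the paper's actual components (the paper's reconstruction is super-resolution-based), and all function names here are hypothetical.

```python
import numpy as np

def denoise(img, k=3):
    """Step 1: mean-filter denoising to suppress adversarial perturbations.
    (Illustrative stand-in for the paper's denoiser; kernel size k is arbitrary.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def reconstruct(img, scale=2):
    """Step 2: down-/up-sampling as a crude proxy for the reconstruction step
    that pulls the denoised sample back toward the clean image."""
    small = img[::scale, ::scale]                            # downsample
    return np.repeat(np.repeat(small, scale, 0), scale, 1)   # upsample

def defend(adversarial_img):
    """Two-step defense: denoise, then reconstruct, before classification."""
    return reconstruct(denoise(adversarial_img))

# Toy example: a flat "clean" image plus small adversarial-style noise.
rng = np.random.default_rng(0)
clean = np.ones((8, 8))
adv = clean + 0.05 * rng.standard_normal((8, 8))
defended = defend(adv)
# The defended sample should lie closer to the clean image than the
# adversarial input does.
print(np.abs(defended - clean).mean() < np.abs(adv - clean).mean())
```

In the actual framework the classifier would then be applied to `defended` rather than `adv`, with the goal of recovering the original label.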
