Abstract

The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a test designed to distinguish humans from computers. Since attackers can recognize CAPTCHAs with high accuracy using deep learning models, geometric transformations are added to CAPTCHAs to disturb recognition by such models. However, excessive geometric transformations may also hinder human recognition of the CAPTCHA. Adversarial CAPTCHAs are special CAPTCHAs that can disrupt deep learning models without affecting humans. Previous work on adversarial CAPTCHAs mainly focuses on defending against filtering attacks. In real-world scenarios, the attackers' models are inaccessible when generating adversarial CAPTCHAs, and attackers may use models with different architectures, so it is crucial to improve the transferability of adversarial CAPTCHAs. We propose CFA, a method for generating more transferable adversarial CAPTCHAs that focuses on altering the content features of the original CAPTCHA. We use the attack success rate as our metric to evaluate the effectiveness of our method when attacking various models; a higher attack success rate means the CAPTCHAs are harder for models to recognize. Experiments show that our method effectively attacks various models, even in the presence of defense methods that an attacker might apply. Our method outperforms other feature-space attacks and provides a more secure version of adversarial CAPTCHAs.
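As a concrete reading of the evaluation metric described above, the minimal Python sketch below computes an attack success rate for a single target recognizer: the fraction of adversarial CAPTCHAs the model fails to label correctly. The `recognize` callable and the data layout are placeholders not specified in the abstract, so this is only an illustrative interpretation of the metric, not the authors' evaluation code.

```python
from typing import Callable, Sequence


def attack_success_rate(recognize: Callable[[object], str],
                        adv_captchas: Sequence[object],
                        labels: Sequence[str]) -> float:
    """Fraction of adversarial CAPTCHAs the target model fails to recognize.

    A higher value means the adversarial CAPTCHAs are more effective at
    preventing that model from recognizing them.
    """
    assert len(adv_captchas) == len(labels), "one label per CAPTCHA expected"
    failures = sum(recognize(x) != y for x, y in zip(adv_captchas, labels))
    return failures / len(labels)
```

To evaluate transferability, the same computation would be repeated against several recognizers with different architectures (none of which are accessed when the adversarial CAPTCHAs are generated) and the resulting rates compared.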
