Abstract

The widespread, mature application of deep learning to human behavior analysis raises the possibility that the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) can no longer accurately determine whether a user is human, which makes distributed denial-of-service (DDoS) attacks launched by deep learning-based malicious automated programs feasible. CAPTCHAs generated from adversarial examples can resist automatic recognition by deep learning-based methods; however, if the deployed CAPTCHAs are exclusively adversarial examples, an adversary has the chance to collect sufficiently many adversarial examples, which itself causes security issues. In this paper, a user behavior-based random distribution scheme for adversarial-example-generated CAPTCHAs is proposed to tackle this sole-distribution problem. Specifically, we generate two kinds of CAPTCHAs: normal ones, and adversarial examples produced by the fast gradient sign method (FGSM). Meanwhile, user behaviors are analyzed and associated with a reasonable probability model. According to the probability selection, either the normal CAPTCHAs or the strong (adversarial) CAPTCHAs are delivered to the users. The experimental results illustrate that our scheme has an excellent ability to distinguish computers from humans, and thus it can protect CAPTCHA systems from DDoS attacks.
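The abstract only names FGSM; the sketch below illustrates the underlying update, x_adv = clip(x + ε·sign(∇_x L), 0, 1), on a toy logistic-regression "recognizer". The model, weights, and ε value are illustrative assumptions — the paper attacks a deep CAPTCHA recognizer, which is not reproduced here.

```python
# Toy FGSM sketch (assumption: a logistic-regression stand-in for the
# CAPTCHA recognizer; the paper's actual deep model is not reproduced).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w):
    # Binary cross-entropy with p = sigmoid(w . x).
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, eps):
    """Return x + eps * sign(grad_x L), clipped to the pixel range [0, 1].

    For this loss the input gradient is grad_x L = (p - y) * w.
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random(16)            # stand-in for a flattened CAPTCHA image
w = rng.standard_normal(16)   # stand-in recognizer weights
y = 1.0                       # true label
eps = 0.1                     # perturbation budget

x_adv = fgsm_perturb(x, y, w, eps)
# The perturbation stays within the eps budget per pixel, and the
# recognizer's loss on x_adv is at least its loss on x.
```

In the deployed scheme, each served CAPTCHA would be either a normal image or such an FGSM-perturbed one, chosen by the user-behavior probability model described in the paper.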
