Abstract
With the rapid advance of digitalization, information security has become increasingly important, and user authentication is a significant line of defense. Over the past few years, many research works on user authentication have been published. Traditional authentication techniques (passwords, ID-card-based authentication, or biometrics such as fingerprint or iris) fall into the category of one-time authentication. Their disadvantage is that once an impostor passes the initial check, for example through forgery, the system continues to operate without any further resistance, putting the entire system at risk. Recently, owing to the prevalence of deep learning, some researchers have applied learning-based models to user authentication. However, almost all existing methods focus only on keystroke dynamics while ignoring the "text" entered by users during keystrokes. In this paper, we propose Contents and Keystroke Dual Attention Networks (CKDAN) with pre-trained models for continuous authentication. To the best of our knowledge, our paper is the first to treat user-inputted "text" during keystrokes as an important asset beyond traditional keystroke-dynamics characteristics. Specifically, we use the well-known pre-trained RoBERTa model to capture textual features. We then pass the textual features and conventional keystroke features through our proposed dual attention networks, which fuse them into final representations. Experiments show that the CKDAN model achieves state-of-the-art performance on two datasets, the Clarkson II keystroke dataset and the Buffalo dataset, outperforming all baseline methods.
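The abstract does not include code, but the described pipeline (textual features from RoBERTa, keystroke features from timing data, fused by dual attention) can be sketched as follows. This is a hedged illustration, not the paper's actual architecture: the class name `DualAttentionFusion`, all layer dimensions, the number of attention heads, and the cross-attention layout are assumptions; real RoBERTa embeddings are replaced by stand-in tensors of the same typical dimension (768).

```python
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    """Illustrative sketch: cross-attend textual and keystroke features,
    then fuse them into one representation. Dimensions and structure are
    assumptions, not the paper's published architecture."""
    def __init__(self, text_dim=768, key_dim=64, hidden=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.key_proj = nn.Linear(key_dim, hidden)
        # one attention branch per modality (hence "dual")
        self.attn_text = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.attn_key = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, text_feats, key_feats):
        # text_feats: (B, T, text_dim), e.g. RoBERTa token embeddings
        # key_feats:  (B, K, key_dim),  e.g. per-keystroke timing features
        t = self.text_proj(text_feats)
        k = self.key_proj(key_feats)
        # each modality attends to the other (cross-attention)
        t_att, _ = self.attn_text(t, k, k)
        k_att, _ = self.attn_key(k, t, t)
        # mean-pool each branch and concatenate for the final representation
        rep = torch.cat([t_att.mean(dim=1), k_att.mean(dim=1)], dim=-1)
        return self.fuse(rep)

# Stand-in tensors; in practice text_feats would come from RoBERTa.
model = DualAttentionFusion()
text_feats = torch.randn(2, 16, 768)   # batch of 2 samples, 16 tokens each
key_feats = torch.randn(2, 32, 64)     # 32 keystroke events per sample
out = model(text_feats, key_feats)
print(out.shape)  # torch.Size([2, 256])
```

The final representation could then feed a verification head (e.g., a similarity score against the enrolled user's profile) for continuous authentication decisions.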