Abstract

Smart head-worn or head-mounted devices, including smart glasses and Virtual Reality (VR) headsets, are gaining popularity. Online shopping and in-app purchases from such headsets present new e-commerce opportunities for app developers. For convenience, users of these headsets may store account login, bank account and credit card details in order to perform quick in-app purchases. If the device is left unattended, an attacker, including an insider, can use the stored account and banking details to make in-app purchases at the expense of the legitimate owner. To better protect legitimate users of VR headsets (or head-mounted displays in general) from such threats, in this paper we propose to use eye movement to continuously authenticate the current wearer of the VR headset. We built a prototype device which allows us to apply visual stimuli to the wearer and to record video of the wearer's eye movements at the same time. We use implicit visual stimuli (the contents of existing apps) which evoke eye movements from the headset wearer without distracting them from their normal activities. This allows us to continuously authenticate the wearer without them being aware of the authentication running in the background. We evaluated our proposed system experimentally with 30 subjects. Our results showed that the authentication accuracy achievable with implicit visual stimuli is comparable to that achieved with explicit visual stimuli. We also tested the time stability of our proposed method by collecting eye movement data on two different days two weeks apart. Our authentication method achieved an Equal Error Rate of 6.9% (resp. 9.7%) when data collected on the same day (resp. two weeks apart) were used for testing. In addition, we considered active impersonation attacks in which attackers try to imitate legitimate users' eye movements. We found that for a simple (resp. complex) eye tracking scene, a successful attack could be realised after on average 5.67 (resp. 13.50) attempts, and our proposed authentication algorithm gave a false acceptance rate of 14.17% (resp. 3.61%). These results show that active impersonation attacks can be prevented by using complex scenes and an appropriate limit on the number of authentication attempts. Lastly, we carried out a survey to study user acceptance of our proposed implicit stimuli. We found that on a 5-point Likert scale, at least 60% of the respondents either agreed or strongly agreed that our proposed implicit stimuli were non-intrusive.
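The abstract reports authentication performance as an Equal Error Rate (EER), the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). The following minimal sketch, which uses hypothetical similarity scores rather than the paper's actual eye-movement features or matcher, illustrates how an EER can be estimated from genuine (same-wearer) and impostor (different-wearer) score distributions by sweeping a decision threshold.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the operating point where FAR equals FRR.
    Scores are similarities, so a sample is accepted when
    score >= threshold."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for thr in thresholds:
        frr = np.mean(genuine < thr)     # legitimate wearer rejected
        far = np.mean(impostor >= thr)   # attacker accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical score distributions for illustration only; they do not
# reproduce the paper's matcher, features, or reported numbers.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)   # same-wearer comparison scores
impostor = rng.normal(0.5, 0.1, 500)  # different-wearer comparison scores
print(f"EER ~ {equal_error_rate(genuine, impostor):.1%}")
```

A lower EER indicates a better trade-off between wrongly rejecting the legitimate wearer and wrongly accepting an attacker, which is why the same-day figure (6.9%) being close to the two-weeks-apart figure (9.7%) supports the method's time stability.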
