Abstract

Recent years have witnessed a rapid proliferation of mobile devices, interwoven with the changes the pandemic has made to people's lives. Although a tremendous number of novel human-device interaction techniques have been proposed to serve diverse audiences and scenarios, limitations and inconveniences remain for people who have difficulty speaking or using their fingers, hands, or arms, or who are wearing masks, glasses, or gloves. To fill this gap in interaction contexts where hands, voice, face, or mouth cannot be used, we take the first step and propose a novel Human-Computer Interaction (HCI) system, TwinkleTwinkle, which senses and recognizes eye blink patterns in a contact-free and training-free manner by leveraging ultrasound signals on commercial devices. TwinkleTwinkle first applies a phase-difference-based approach to depict candidate eye blink motion profiles without removing any noise, and then models the intrinsic characteristics of blink motions through adaptive constraints to separate these tiny patterns from interference, even when blink habits and involuntary movements vary between individuals. We propose a vote-based approach to derive the final patterns, which are designed to map to number combinations that are either self-defined or based on carriers such as ASCII and Morse code, so that the interaction embeds seamlessly into familiar language systems. We implement TwinkleTwinkle on smartphones, with all methods realized in the time domain, and conduct extensive evaluations in various settings. Results show that TwinkleTwinkle achieves about 91% accuracy in recognizing 23 blink patterns across different people.
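The abstract does not spell out the signal pipeline, but a common realization of phase-difference ultrasound sensing on a smartphone is to emit a near-inaudible continuous-wave tone from the speaker, coherently demodulate the microphone recording into I/Q components, and track the phase of the reflected path over time; the phase change between consecutive samples is proportional to the change in path length caused by motion such as an eyelid closing. The sketch below illustrates this idea under those assumptions; the carrier frequency, sampling rate, filter order, and low-pass cutoff are illustrative values, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 48_000   # microphone sampling rate (Hz), common on smartphones (assumed)
FC = 20_000   # near-inaudible carrier frequency of the emitted tone (Hz, assumed)

def iq_demodulate(rx, fs=FS, fc=FC, cutoff=100.0):
    """Coherently demodulate the recorded tone into baseband I/Q components."""
    t = np.arange(len(rx)) / fs
    i = rx * np.cos(2 * np.pi * fc * t)       # in-phase mixing
    q = rx * -np.sin(2 * np.pi * fc * t)      # quadrature mixing
    b, a = butter(4, cutoff / (fs / 2))       # low-pass keeps only the slow, motion-induced baseband
    return filtfilt(b, a, i), filtfilt(b, a, q)

def phase_difference_profile(rx):
    """Convert a raw recording into a motion profile of consecutive phase differences."""
    i, q = iq_demodulate(rx)
    phase = np.unwrap(np.arctan2(q, i))       # unwrapped reflected-path phase over time
    return np.diff(phase)                     # phase change tracks path-length change (blink motion)

if __name__ == "__main__":
    # Synthetic check: a 1 s recording whose path delay wiggles like a slow blink.
    t = np.arange(FS) / FS
    delay = 1e-5 * np.sin(2 * np.pi * 2 * t)  # simulated eyelid-induced delay (s)
    rx = np.cos(2 * np.pi * FC * (t - delay))
    profile = phase_difference_profile(rx)
    print(profile.shape, profile.max())
```

A pattern segmented from such a profile could then be mapped onto a symbol stream, for instance by treating short and long blinks as Morse dots and dashes, consistent with the carrier-based mapping the abstract describes.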
