Abstract

In this work, we show how our open-source accessibility software, the FaceSwitch, can help motor-impaired users interact with a computer efficiently and hands-free. The FaceSwitch enhances gaze interaction with video-based facial-gesture interaction. The resulting multimodal system allows users to interact with a user interface by means of gaze pointing for target selection and facial gestures for target-specific action commands. The FaceSwitch maps facial gestures to specific mouse or keyboard events, such as left mouse click, right mouse click, or page scroll down. Hence, facial gestures serve the purpose of mechanical switches. With this multimodal interaction paradigm, the user gazes at the object in the user interface with which they want to interact and then triggers a target-specific action by performing a facial gesture. Through a rigorous user study, we have obtained quantitative evidence suggesting that our proposed interaction paradigm outperforms traditional accessibility options, such as gaze-only interaction or gaze combined with a single mechanical switch, while coming close to traditional mouse-based interaction in terms of speed and accuracy. We make the FaceSwitch software freely available to the community so that the output of our research can help the target audience.
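The gesture-to-event mapping described above can be pictured as a small dispatch table that turns each recognized facial gesture into a mouse or keyboard event, making the gesture act as a virtual switch. The sketch below is purely illustrative: the gesture names, event names, and function are hypothetical and are not taken from the FaceSwitch implementation.

```python
# Hypothetical sketch of mapping recognized facial gestures to input
# events, in the spirit of the FaceSwitch's switch-like behavior.
# Gesture and event names are illustrative assumptions.

GESTURE_EVENTS = {
    "smile": "left_click",
    "raise_eyebrows": "right_click",
    "open_mouth": "scroll_down",
}

def dispatch(gesture: str):
    """Return the input event for a recognized gesture, or None if the
    gesture is not bound to any event."""
    return GESTURE_EVENTS.get(gesture)
```

In a real system the returned event would be injected into the operating system's input queue at the current gaze position, so that gaze supplies the pointer location and the gesture supplies the action.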
