Abstract

Gaze-based interfaces are especially useful for people with disabilities affecting the upper limbs or hands. Typically, users select from a number of options (e.g., letters or commands) displayed on a screen by gazing at the desired option. However, in some applications, such as gaze-based driving, it may be dangerous to direct gaze away from the environment towards a separate display. In addition, a purely gaze-based interface can impose a high cognitive load, as gaze is not normally used for selection or control but rather for other purposes, such as information gathering. To address these issues, this paper presents a cost-effective multi-modal system for gaze-based driving that combines appearance-based gaze estimates derived from webcam images with push-button inputs that trigger command execution. The system uses an intuitive "direct interface", in which users determine the direction of motion by gazing in the corresponding direction in the environment. We have implemented the system for both wheelchair control and robotic teleoperation. It should offer substantial benefits to patients with severe motor disabilities, such as ALS, by providing a more natural and affordable method of wheelchair control. We compare the performance of our system to the more conventional "indirect" approach, in which gaze is used to select commands from a separate display, and show that our system enables faster and more efficient navigation.
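
The abstract does not give implementation details, but the "direct interface" it describes can be sketched roughly as follows. The snippet below is a minimal, hypothetical Python control loop: the function names (estimate_gaze_yaw, read_button, send_velocity) and parameter values are placeholders introduced here for illustration, not the authors' actual code. It shows how a horizontal gaze angle could be mapped directly to a motion command, with a push button acting as a clutch that gates command execution.

```python
# Hypothetical sketch of a button-gated, gaze-direct driving loop.
# estimate_gaze_yaw, read_button, and send_velocity are placeholders
# standing in for a webcam-based gaze estimator, a push-button input,
# and the wheelchair/robot velocity interface.

import math
import time

MAX_LINEAR = 0.5                  # m/s, assumed forward speed cap
MAX_ANGULAR = 0.8                 # rad/s, assumed turn rate cap
GAZE_DEADBAND = math.radians(5)   # ignore small gaze angles to reduce jitter


def estimate_gaze_yaw() -> float:
    """Placeholder for an appearance-based gaze estimator.
    Returns horizontal gaze angle in radians (negative = left)."""
    return 0.0


def read_button() -> bool:
    """Placeholder for the push-button input that triggers command execution."""
    return False


def send_velocity(linear: float, angular: float) -> None:
    """Placeholder for the platform's velocity command interface."""
    print(f"cmd linear={linear:.2f} m/s angular={angular:.2f} rad/s")


def control_step() -> None:
    # Gaze alone never moves the platform; the button acts as a clutch,
    # reflecting the multi-modal design described in the abstract.
    if not read_button():
        send_velocity(0.0, 0.0)
        return

    yaw = estimate_gaze_yaw()
    if abs(yaw) < GAZE_DEADBAND:
        yaw = 0.0

    # Direct mapping: looking further to one side turns harder that way;
    # looking straight ahead drives forward.
    angular = max(-1.0, min(1.0, yaw / math.radians(30))) * MAX_ANGULAR
    linear = MAX_LINEAR * (1.0 - abs(angular) / MAX_ANGULAR)
    send_velocity(linear, angular)


if __name__ == "__main__":
    while True:
        control_step()
        time.sleep(0.05)  # ~20 Hz control loop
```

In this sketch the gaze estimate steers while the button both enables motion and implicitly sets forward speed; the actual system may distribute these roles differently.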
