Abstract

The emergence of assistive robots presents the possibility of restoring vital degrees of independence to the elderly and impaired in activities of daily living (ADL). However, one of the main challenges is the lack of effective and intuitive human–robot interaction (HRI). While humans can express their intentions in many ways (e.g., physical gestures, motions, speech, or language patterns), gaze-based implicit intention communication remains underdeveloped. In this study, a novel nonverbal implicit communication framework based on eye gaze is introduced for HRI. In this framework, a user's eye-gaze movements are proactively tracked and analyzed to infer the user's intention in ADL; the inferred intention can then be used to command assistive robots to provide the proper service. The advantage of this framework is that gaze-based communication is accessible to most people: it requires very little effort, and most of the elderly and impaired retain visual capability. This framework is expected to simplify HRI, consequently enhancing the adoption of assistive technologies and improving users' independence in daily living. Testing results confirmed that a human's subtle gaze cues on visualized objects can be effectively used to communicate intention. Results also demonstrated that gaze-based intention communication is easy to learn and use. In this study, the relationship between visual behaviors and the mental process during human intention expression was studied for the first time, building a fundamental understanding of this process. These findings are expected to guide the further design of accurate intention-inference algorithms and intuitive HRI.
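The core step the abstract describes, inferring a user's intent from gaze cues on visualized objects, can be sketched as a simple dwell-time classifier: if the gaze stays on one object long enough, that object is taken as the intended target. This is only an illustrative sketch; the threshold, sampling rate, and all function names below are assumptions, not the authors' actual algorithm.

```python
# Illustrative dwell-time intent inference (hypothetical parameters).
DWELL_THRESHOLD_S = 0.8   # assumed fixation duration signaling intent
SAMPLE_PERIOD_S = 0.02    # assumed 50 Hz eye-tracker sampling rate

def infer_intended_object(gaze_hits):
    """Return the object id whose consecutive gaze dwell first exceeds
    the threshold, or None if no object is fixated long enough.

    gaze_hits: sequence of object ids (or None) giving the visualized
    object each gaze sample lands on.
    """
    current, run = None, 0
    for hit in gaze_hits:
        if hit is not None and hit == current:
            run += 1
        else:
            current, run = hit, (1 if hit is not None else 0)
        if current is not None and run * SAMPLE_PERIOD_S >= DWELL_THRESHOLD_S:
            return current
    return None

# One second of samples on "cup" crosses the 0.8 s dwell threshold.
samples = [None] * 10 + ["cup"] * 50 + ["plate"] * 5
print(infer_intended_object(samples))  # → cup
```

In practice the paper's framework would feed such an inferred target to the assistive robot as a command; real systems typically also filter saccades and blinks before computing dwell, which this sketch omits.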

