Abstract

Control of robot arms is often required in engineering and can be performed using different methods. This study examined and systematically compared the use of a controller, an eye gaze tracker, and a combination thereof in a multimodal setup for control of a robot arm. Tasks of different complexities were defined, and twenty participants completed an experiment using these interaction modalities to solve them. More specifically, there were three tasks: the first was to navigate a chess piece from one square to another pre-specified square; the second was the same as the first but required more moves to complete; and the third was to move multiple pieces to reach a pre-defined arrangement. While gaze control has the potential to be more intuitive than a hand controller, it suffers from limitations with regard to spatial accuracy and target selection. The multimodal setup aimed to mitigate the weaknesses of the eye gaze tracker, creating a superior system without simply relying on the controller. The experiment shows that the multimodal setup improved performance over the eye gaze tracker alone (p < 0.05) and was competitive with the controller-only setup, though it did not outperform it (p > 0.05).

Highlights

  • Multimodal interaction for effective human–robot interaction (HRI) is a field with considerable potential

  • In terms of cognitive load, we observed that multimodal interaction succeeded in outperforming gaze alone

  • The multimodal interface did not outperform the controller-only interface; the difference between the controller and multimodal modalities was not statistically significant


Introduction

Multimodal interaction for effective human–robot interaction (HRI) is a field with considerable potential. It can offer intuitive interaction by taking advantage of several interaction devices acting in support of each other. However, it has not reached the point of general deployment, and a significant line of research is needed to fully understand the usefulness of multimodal techniques. More than a decade ago, Morimoto and Mimica [18] reviewed the state of gaze interaction technology with respect to practical use by the average user. They described how systems at the time were limited by the need for constrained head movement and constant recalibration, and gaze tracking was not yet capable of delivering a high enough quality of experience for general use, despite its apparent potential. There has since been substantial research into resolving these limitations.
