Abstract

This paper presents a lightweight, infrastructureless head-worn interface for robust, real-time robot control in Cartesian space using head- and eye-gaze. The interface weighs just 162 g in total. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM2) for RGB-D cameras with a Magnetic, Angular Rate, and Gravity (MARG) sensor filter. The data fusion process is designed to dynamically switch between magnetic, inertial, and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end effector (EFF) positioning and employs a head motion mapping technique to effectively control the robot's EFF orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimates. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of mm for head-gaze and mm for eye-gaze at a distance of 0.3–1.1 m from the user. This indicates that the proposed interface offers a precise control mechanism for hands-free, full six-degree-of-freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
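To make the described switching behavior concrete, the following is a minimal, illustrative Python sketch of a dynamic heading-source selection between the visual, magnetic, and inertial references. The thresholds, function name, and fallback order are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of dynamic heading-source selection for a MARG/visual
# fusion filter. All constants and names below are assumed for illustration.

MAG_FIELD_NOMINAL = 49.0   # expected magnetic field magnitude in microtesla (assumed)
MAG_TOLERANCE = 5.0        # allowed deviation before the magnetometer is distrusted
MIN_TRACKED_FEATURES = 50  # minimum tracked visual features for a trustworthy heading

def select_heading_source(mag_norm_ut: float, tracked_features: int) -> str:
    """Pick the heading source for the orientation filter.

    Prefers the visual (SLAM) heading, falls back to the magnetometer when
    visual tracking degrades, and to gyroscope integration (inertial dead
    reckoning) when both references are disturbed.
    """
    visual_ok = tracked_features >= MIN_TRACKED_FEATURES
    mag_ok = abs(mag_norm_ut - MAG_FIELD_NOMINAL) <= MAG_TOLERANCE

    if visual_ok:
        return "visual"    # SLAM pose provides the heading reference
    if mag_ok:
        return "magnetic"  # undisturbed magnetometer corrects gyro drift
    return "inertial"      # integrate angular rate only; drift accumulates

# Example: degraded visual data plus a disturbed magnetic field force the
# filter onto the inertial heading source.
print(select_heading_source(mag_norm_ut=62.3, tracked_features=12))  # -> inertial
```

In a filter of this kind, the selected source would typically feed the measurement update of the orientation estimator, while gyroscope integration always drives the prediction step.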

Highlights

  • Direct human-robot collaboration demands robust interfaces for interacting with or controlling a robotic system in a human-safe manner

  • Recent approaches focus on head motion or eye-gaze tracking data for direct robot control, since both modalities are naturally correlated with interaction intention and enable accurate control mechanisms [7]

  • Orientation and position estimation accuracy is calculated from the head motion data of all 30 trials (see the sketch after this list)
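As a hedged illustration of how such an accuracy figure can be computed, the snippet below pools per-trial 3D position estimates and reports the mean Euclidean error over the 30 trials. The array shapes and the simulated data are assumptions for illustration, not the paper's evaluation code.

```python
import numpy as np

# Hypothetical sketch of the accuracy metric: the mean Euclidean error between
# estimated and ground-truth 3D gaze positions, pooled over all 30 trials.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(0.3, 1.1, size=(30, 3))            # 30 trials, xyz in metres
estimates = ground_truth + rng.normal(0, 0.01, size=(30, 3))  # simulated estimates

errors = np.linalg.norm(estimates - ground_truth, axis=1)  # per-trial Euclidean error
mean_error_mm = 1000.0 * errors.mean()                     # convert metres to mm
print(f"mean Euclidean error: {mean_error_mm:.1f} mm")
```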


Introduction

Direct human-robot collaboration demands robust interfaces for interacting with or controlling a robotic system in a human-safe manner. The question of how an interface can be designed to effectively and intuitively allow for hands-free robot control has drawn significant research attention in the last decade [5,6]. Gaze-based control signals can accelerate and simplify human-robot collaboration, especially for object targeting in pick-and-place tasks, a core activity in collaborative settings [2,8]. Affordability and mobility are key factors for eye- or head-gaze-based interfaces to enable intuitive human-robot collaboration and to transfer research and development to further applications and end users.
