Abstract

High-level control of mobile robots currently requires a personal computer and a traditional graphical user interface in the vast majority of cases. Such interfaces are unnatural and often poorly suited to tasking a robot with short interactions that require only brief commands, a problem that is especially acute for persons with disabilities. New technologies such as Google Glass provide a variety of high-quality sensors and an unobtrusive display, allowing users to carry a powerful robot control interface with them at all times. In this paper we present a system that provides the basic elements required for a user to interact with a robot using Glass. We also informally evaluate Glass as an input device and present several example applications that this interface enables.
