Abstract

This article proposes an online control programming algorithm for human–robot interaction (HRI) systems, in which robot actions are driven by the recognition of gestures performed by human operators, based on visual images. In contrast to traditional robot control systems, which rely on predefined programs and cannot change the robot's tasks freely, our system allows the operator to train the recognizer online and replan human–robot interaction tasks in real time. The proposed system comprises three components: an online personal feature pretraining system, a gesture recognition system, and a task replanning system for robot control. First, we collected and analyzed features extracted from images of human gestures and used those features to train the recognition program in real time. Second, a multifeature cascade classifier algorithm was applied to guarantee both the accuracy and the real-time performance of our gesture recognition method. Finally, to confirm the effectiveness of our algorithm, we selected a flight robot as our test platform and conducted an online robot control experiment based on the visual gesture recognition algorithm. Extensive experiments confirm the effectiveness and efficiency of our method.
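
As a rough, hypothetical sketch of the cascade idea described above (not the authors' implementation), the example below chains feature-specific classifiers so that cheap features reject non-matching frames early and only surviving candidates reach the more expensive stages; all feature functions, thresholds, and class names here are illustrative assumptions.

```python
import numpy as np

class CascadeStage:
    """One stage of a multifeature cascade: a feature extractor plus a
    threshold test. A frame must pass every stage to be accepted."""
    def __init__(self, extract, threshold):
        self.extract = extract      # maps an image to a scalar score
        self.threshold = threshold  # minimum score needed to pass

    def passes(self, image):
        return self.extract(image) >= self.threshold

class MultiFeatureCascade:
    """Orders stages cheapest-first, so most negative frames are
    rejected before the expensive features are ever computed."""
    def __init__(self, stages):
        self.stages = stages

    def classify(self, image):
        for stage in self.stages:
            if not stage.passes(image):
                return False        # early rejection keeps latency low
        return True                 # all stages agree: gesture candidate

# Hypothetical features: a cheap intensity-ratio filter and a costlier
# edge-density measure standing in for shape features.
def bright_ratio(img):
    return float(np.mean(img > 0.5))

def edge_density(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

cascade = MultiFeatureCascade([
    CascadeStage(bright_ratio, 0.3),   # fast, coarse filter
    CascadeStage(edge_density, 0.05),  # slower, more discriminative
])

frame = np.random.rand(64, 64)         # stand-in for a camera frame
print("gesture candidate:", cascade.classify(frame))
```

The early-exit structure is what lets such a cascade meet real-time constraints: for most frames, the per-frame cost is dominated by the cheap first stage.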

Highlights

  • Although the recognition performance of deep-learning-based algorithms is already considerable, they remain unsuitable for certain HRI applications that require highly accurate control instructions, such as the unmanned aerial vehicle (UAV) control system introduced here.

  • We have proposed a new online training and programming system for controlling robots based on a visual gesture recognition algorithm and have confirmed the effectiveness of this system using a UAV platform, where human gesture recognition drives flight control (see the sketch after this list).
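
As a loose illustration of the last highlight (not the paper's actual interface), the sketch below maps recognized gesture labels to flight commands; every name here, including `UAVStub` and the gesture labels, is a hypothetical stand-in for the platform's real control API.

```python
class UAVStub:
    """Stand-in for a real flight-controller interface."""
    def send(self, command: str) -> None:
        print(f"UAV executing: {command}")

# Illustrative gesture labels and commands; not the paper's vocabulary.
GESTURE_TO_COMMAND = {
    "palm_open":   "hover",
    "fist":        "land",
    "swipe_left":  "yaw_left",
    "swipe_right": "yaw_right",
    "point_up":    "ascend",
}

def dispatch(gesture: str, uav: UAVStub) -> None:
    """Translate a recognized gesture into a flight command; unknown
    gestures are ignored so spurious detections cannot move the UAV."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is not None:
        uav.send(command)

uav = UAVStub()
dispatch("fist", uav)     # -> UAV executing: land
dispatch("unknown", uav)  # safely ignored
```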

Introduction

With the rapid development of robots, the demand for smart/intelligent robots and devices is growing quickly.[1,2,3] One of the most important metrics for evaluating a smart robot is its ability to understand operator commands and follow directions, a capability well known as human–robot interaction (HRI).[4,5,6] The earliest HRI systems were controlled through mechanical valves and rockers.[7] In contrast to interaction among humans, all of these methods require the operator to touch a device to control the robot (as shown in Figure 1), which is not in line with people’s interaction habits and demands a long period of specialized training when operating certain types of robots, such as industrial robots and unmanned aerial vehicles (UAVs).[10]
