Abstract

In this study, we present an autonomous grasping system that uses a vision-guided hand–eye coordination policy with closed-loop vision-based control to ensure a sufficient task success rate while maintaining acceptable manipulation precision. When facing diverse tasks in complex environments, an autonomous robot should be evaluated by task precision, which encompasses both the accuracy of perception and the precision of manipulation, rather than by the grasping success rate alone, as in most previous works. Task precision combines the advantages of the grasping behaviors observed in humans with those of the grasping methods used in existing works. We propose a visual servoing approach and a subtask decomposition strategy to attain the desired level of task precision. Our system performs satisfactorily on a tangram puzzle task, and the experiments demonstrate the accuracy of its perception, the precision of its manipulation, and the robustness of the overall system. Moreover, the system can substantially improve the adaptability and flexibility of autonomous robots.
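
As a rough illustration of the closed-loop, vision-based control mentioned above, the sketch below implements one step of classical image-based visual servoing. It is a minimal sketch, not this paper's implementation: the feature points (in normalized image coordinates), depths, and gain are assumed inputs, and the interaction matrix follows the standard Chaumette–Hutchinson formulation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one image point (x, y),
    expressed in normalized coordinates, at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, targets, depths, gain=0.5):
    """One closed-loop step: camera twist v = -gain * pinv(L) @ error,
    driving the observed features toward their target positions."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ error  # (vx, vy, vz, wx, wy, wz)

# Toy usage: four coplanar points at depth 0.5 m, slightly offset
# from their desired image positions.
feats = [(0.11, 0.09), (-0.10, 0.10), (-0.09, -0.11), (0.10, -0.10)]
goals = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(feats, goals, depths=[0.5] * 4)
```

Iterating this step, one frame at a time, until the feature error drops below a threshold yields the closed-loop behavior described above: each new image produces a fresh error and a small corrective camera motion.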

Highlights

  • In this study, we outline a general framework of robot control to solve a class of intelligent grasping tasks

  • The tangram blocks can be placed at arbitrary positions and poses against a complex background

  • This study presents a vision-guided hand–eye coordination system for robotic grasping

  • The manipulation errors of 99% of the trials are concentrated within the range of 0.5–2.5 mm



Introduction

In this study, we outline a general framework of robot control to solve a class of intelligent grasping tasks. The robot needs to reliably recognize the target object, accurately locate it, and close the control loop with visual feedback. These three requirements are difficult for an intelligent robot to satisfy because they pose challenging computer vision problems, such as invariant recognition against a complex background, optical measurement, and adaptive control with visual feedback. In many existing works on robot grasping, the robot first perceives the scene and recognizes appropriate grasp locations, then plans a path to those locations, and finally follows the planned path [1,2]. These stages are respectively called perception, planning, and action. By contrast, the grasping behaviors observed in humans are dynamical processes that interleave sensing and manipulation at every stage [3].
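
The distinction between the staged pipeline and the interleaved behavior can be made concrete with a short sketch. Everything below is illustrative: the sense, plan, and act callables, the gain, and the tolerance are hypothetical stand-ins, not components of the system described in this paper.

```python
import numpy as np

def open_loop_grasp(sense, plan, act):
    """Perception -> planning -> action, executed once each, as in
    many prior grasping pipelines [1,2]; no feedback after planning."""
    target = sense()          # perceive the scene once
    path = plan(target)       # plan a path to the grasp location
    act(path)                 # follow the path without correction

def closed_loop_grasp(sense_error, act, gain=0.5, tol=1e-3, max_iters=200):
    """Sensing and manipulation interleaved at every step, in the
    spirit of human grasping [3]: re-sense, re-estimate, correct."""
    for _ in range(max_iters):
        e = sense_error()                 # fresh measurement each cycle
        if np.linalg.norm(e) < tol:
            return True                   # converged within tolerance
        act(-gain * e)                    # small corrective motion
    return False

# Toy usage: drive a 2-D end-effector offset to zero under noisy sensing.
state = np.array([0.05, -0.03])
rng = np.random.default_rng(0)
def sense_error():
    return state + rng.normal(scale=1e-4, size=2)   # noisy offset estimate
def act(dp):
    state[:] = state + dp                           # apply the correction
closed_loop_grasp(sense_error, act)
```

Under noisy sensing, the open-loop variant inherits the full perception error, whereas the closed-loop variant averages it out over many small corrections; this is the intuition behind the visual servoing approach developed in this work.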

