Abstract

Intelligent grasping requires a manipulator that can grasp objects with a high degree of freedom in a wild (unstructured) environment. Owing to their limited perception of targets and environments, most industrial robots are restricted to top-down 4-DoF grasping. In this work, we propose a novel low-cost coarse-to-fine robotic grasping framework. First, we design a global-localization-based environment perception method that enables the manipulator to roughly and automatically locate the workspace. Then, constrained by this initial localization, a 6-DoF pose estimation method based on 3D point cloud reconstruction is proposed so that the manipulator can finely locate the grasping target. Finally, our framework realizes fully functional visual 6-DoF robotic grasping, including two visual servoing and grasp planning strategies for grasping different kinds of objects. It can also integrate various state-of-the-art 6-DoF pose estimation algorithms to facilitate practical grasping applications and research. Experimental results show that our method achieves autonomous robotic grasping with a high degree of freedom in an unknown environment. In particular, for objects with occlusion, singular shapes, or small scale, our method still grasps robustly.
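To make the coarse-to-fine structure concrete, the sketch below outlines such a two-stage loop in Python. All function names, stub bodies, and frame conventions are illustrative assumptions on our part, not the paper's implementation; in practice the stubs would be replaced by the global localization, point cloud reconstruction, and 6-DoF pose estimation components described above.

```python
import numpy as np

# Minimal sketch of a coarse-to-fine grasping loop.
# The stubs below are assumed placeholders, not the authors' components.

def locate_workspace(rgbd):
    """Coarse stage: global localization returns a rough workspace pose."""
    return np.eye(4)  # placeholder: 4x4 transform in the robot base frame

def estimate_object_pose(cloud):
    """Fine stage: any 6-DoF pose estimator could plug in here."""
    return np.eye(4)  # placeholder: object pose in the workspace frame

def plan_grasp(T_base_obj):
    """Turn the estimated object pose into a gripper goal pose."""
    approach = np.eye(4)
    approach[2, 3] = -0.10  # back off 10 cm along the tool z-axis (assumed convention)
    return T_base_obj @ approach

def coarse_to_fine_grasp(rgbd_coarse, cloud_fine):
    T_base_ws = locate_workspace(rgbd_coarse)    # 1) roughly locate the workspace
    T_ws_obj = estimate_object_pose(cloud_fine)  # 2) finely locate the target in it
    T_base_obj = T_base_ws @ T_ws_obj            # 3) compose into the base frame
    return plan_grasp(T_base_obj)                # 4) gripper goal for visual servoing

print(coarse_to_fine_grasp(rgbd_coarse=None, cloud_fine=None))
```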

Highlights

  • General visual grasping solutions need to set up the observation pose in advance and establish the grasping model of the manipulator as prior knowledge, including object geometry and mechanical dynamics, and they grasp with fixed motions in a pre-set workspace [1]–[3]

  • In an unstructured environment where neither the relative position of the workspace nor the target pose is known, most existing robots can neither actively search for the target nor dynamically drive the end effector for spatial grasping. They depend on artificial settings and can only replay pre-taught motions in a fixed space, which leaves the manipulator unable to adapt to intelligent grasping applications

  • We introduce a novel low-cost coarse-to-fine manipulator framework that realizes automatic workspace localization and dynamic 6-DoF grasping of objects in an unstructured environment

Introduction

General visual grasping solutions (eye-to-hand or eye-in-hand) need to set up the observation pose in advance and establish the grasping model of the manipulator as prior knowledge, including object geometry and mechanical dynamics, and they grasp with fixed motions in a pre-set (structured) workspace [1]–[3]. The latest data-driven deep learning methods collect large amounts of effective grasping training data with efficient learning methods and special sensors to achieve high-freedom (6-DoF: x, y, z, yaw, pitch, roll) grasping in an unstructured environment [4]–[6]. However, they still need to manually guide the manipulator to observe the targets, and diverse workspaces require different training data, which leaves such robotic grasping with low generalization ability.
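For reference, a 6-DoF grasp pose (x, y, z, yaw, pitch, roll) is commonly packed into a single 4x4 homogeneous transform. The snippet below is a minimal example assuming intrinsic z-y-x Euler angles in radians; the text does not state its convention, so this is one common choice.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_6dof(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from the six grasp parameters.

    Assumes intrinsic z-y-x (yaw-pitch-roll) Euler angles in radians.
    """
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("ZYX", [yaw, pitch, roll]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# Example: a gripper pose 0.4 m in front of the base, 0.2 m up, rotated 90 deg about z.
print(pose_6dof(0.4, 0.0, 0.2, np.pi / 2, 0.0, 0.0))
```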
