Abstract

In this article, a novel, efficient grasp synthesis method is introduced that can be used for closed-loop robotic grasping. Using only a single monocular camera, the proposed approach detects contour information from an image in real time and then determines the precise position of an object to be grasped by matching its contour against a given template. This approach is much lighter than the currently prevailing methods, especially vision-based deep-learning techniques, in that it requires no prior training. By combining state-of-the-art techniques for edge detection, superpixel segmentation, and shape matching, our visual servoing method does not rely on accurate camera calibration or position control and can adapt to dynamic environments. Experiments show that the approach achieves high levels of compliance, performance, and robustness in diverse experimental environments.
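As a rough sketch of the pipeline the abstract describes (edge detection, contour extraction, and template-based shape matching), the following Python/OpenCV example detects contours with Canny edge detection and ranks them against a template contour using Hu-moment shape matching. This is one plausible realization under stated assumptions, not the paper's exact method: the superpixel-segmentation step is omitted, and the Canny thresholds and dissimilarity threshold are illustrative values.

```python
# Minimal sketch of contour-based object localization, assuming OpenCV.
# Thresholds and the dissimilarity cutoff are illustrative assumptions.
import cv2

def largest_contour(gray):
    """Return the largest external contour in a grayscale image,
    e.g. to extract a template contour from a template image."""
    edges = cv2.Canny(gray, 50, 150)                      # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

def locate_object(frame_bgr, template_contour, max_dissimilarity=0.1):
    """Find the contour in the frame that best matches the template.

    Returns the (x, y) centroid of the best match, or None if no
    contour is similar enough.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, max_dissimilarity
    for c in contours:
        # matchShapes compares Hu-moment invariants of the two contours
        score = cv2.matchShapes(template_contour, c,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = c, score
    if best is None:
        return None
    m = cv2.moments(best)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])     # centroid
```

Because matchShapes compares Hu-moment invariants, the ranking is insensitive to the object's position, scale, and in-plane rotation, which suits detection from a moving, uncalibrated camera.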

Highlights

  • An aging population and rising labor costs are acute challenges facing society, resulting in a high demand for indoor service robots

  • Deep-learning techniques have emerged as the preferred methods in the field of grasp synthesis [7,8]. These methods use various versions of convolutional neural networks (CNNs) to identify the objects to be grasped [9,10], which means they demand large amounts of data and time for training and testing, and they require an expensive hardware environment

Summary

Introduction

An aging population and rising labor costs are acute challenges facing society, resulting in a high demand for indoor service robots. Because the proposed method identifies objects by their shape features, it does not require a large number of training samples; this eliminates cumbersome manual labeling work and greatly lowers the computing-hardware requirements. The method combines the object-recognition module with the robot-control module to form a hand-eye coordination mechanism with feedback, that is, a closed-loop control process. As a result, it is not necessary to calculate exact absolute coordinate values, only the relative positional offset between the object and the gripper, which greatly simplifies the conversion between multiple coordinate systems. The human visual system recognizes objects mainly from contour information [11,12], so our method draws on the results of cognitive research.
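The closed-loop control described here can be pictured as a simple image-space servo loop that acts only on the relative offset between object and gripper. The snippet below is a minimal sketch, not the paper's implementation: `camera`, `robot`, `detect_object`, and `detect_gripper` are hypothetical interfaces, and the gain and tolerance values are illustrative assumptions.

```python
# Minimal sketch of closed-loop alignment using only relative offsets.
# No absolute world coordinates are computed; each iteration measures
# the pixel offset between object and gripper and commands a small
# relative correction. All interfaces here are hypothetical.
def visual_servo_step(camera, robot, detect_object, detect_gripper,
                      gain=0.5, tol_px=3.0):
    """One iteration of image-based closed-loop alignment.

    Returns True once the gripper is aligned with the object.
    """
    frame = camera.read()
    obj_xy = detect_object(frame)        # e.g., contour matching as above
    grip_xy = detect_gripper(frame)
    if obj_xy is None or grip_xy is None:
        return False                     # nothing to act on this frame
    dx = obj_xy[0] - grip_xy[0]
    dy = obj_xy[1] - grip_xy[1]
    if abs(dx) < tol_px and abs(dy) < tol_px:
        return True                      # aligned; ready to grasp
    robot.move_relative(gain * dx, gain * dy)   # proportional correction
    return False
```

Because each correction is proportional to the currently observed offset, the loop tolerates calibration error and a moving target: any residual misalignment simply reappears as a new offset in the next frame and is corrected again.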

Related work
Results of object recognition
Experiments
Discussion