Abstract

In unstructured home environments, objects of many types may lie in random poses and be stacked in a disorganized manner, which makes machine-vision-based grasp pose estimation and grasp planning very complicated. This paper proposes a method for grasp pose detection in cluttered scenes based on a convolutional neural network combined with information from multiple self-powered sensors. First, a search strategy for candidate grasp poses based on the 3D point cloud is proposed, and a single-channel grasp image dataset representing grasp poses is built from the BigBIRD dataset. Second, a ResNet is constructed to rank and filter the single-channel grasp images representing candidate grasp poses, and it is compared with three mainstream classification networks: Inception V2, VGG-A, and LeNet. A perception analysis function and an execution planning function are developed under ROS, and effective grasping by the manipulator in scenes of scattered piles is realized based on the detected grasp poses combined with the information from the multiple self-powered sensors. In an environment of scattered, stacked objects, the experimental results show that the ResNet-based method outperforms the other three networks: the average grasp pose detection success rates of the ResNet, Inception V2, VGG-A, and LeNet networks are 90.67%, 82.67%, 86.67%, and 87.33%, respectively, which verifies the effectiveness and superiority of the deep-learning-based grasp pose detection model proposed in this paper.
