Abstract

In this paper, a manipulation planning method for object re-orientation based on semantic-segmentation keypoint detection is proposed for a robot manipulator, enabling it to detect randomly placed objects and re-orient them to a specified position and pose. The method consists of two main parts: (1) a 3D keypoint detection system and (2) a manipulation planning system for object re-orientation. In the 3D keypoint detection system, an RGB-D camera is used to capture information about the environment and to generate 3D keypoints of the target object that represent its position and pose. This simplifies the 3D model representation so that manipulation planning for object re-orientation can be executed at the category level by adding varied training data for the object in the training phase. In addition, 3D suction points in both the object’s current and expected poses are generated as inputs to the next stage. In that stage, the Mask Region-based Convolutional Neural Network (Mask R-CNN) algorithm is used for preliminary object detection and object image extraction. The image with the highest confidence score is selected as the input to the semantic segmentation system, which classifies each pixel of the image into the corresponding pack unit of the object. After the convolutional neural network performs semantic segmentation, the Conditional Random Fields (CRFs) method is applied for several iterations to obtain a more accurate object recognition result. Once the target object is segmented into pack units, the center position of each pack unit can be obtained. Then, a normal vector at each pack unit’s center point is generated from the depth image information, and the pose of the object is obtained by connecting the center points of the pack units. In the manipulation planning system for object re-orientation, the pose of the object and the normal vector of each pack unit are first transformed into the working coordinate system of the robot manipulator. Then, according to the current and expected poses of the object, the spherical linear interpolation (Slerp) algorithm is used to generate a series of workspace movements for the robot manipulator to re-orient the object. In addition, the pose of the object is adjusted about the z-axis of the object’s geodetic coordinate system based on image features on the object’s surface, so that the pose of the placed object approaches the desired pose. Finally, a robot manipulator and a laboratory-made vacuum suction cup are used to verify that the proposed system can complete the planned object re-orientation task.
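
As an illustration of the Slerp-based re-orientation planning step described above, the following is a minimal sketch that interpolates between the object's current and expected orientations to produce intermediate waypoints for the manipulator. The use of SciPy, the function name, and the step count are assumptions made for illustration and are not taken from the authors' implementation.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def reorientation_waypoints(current_quat, target_quat, num_steps=10):
    """Generate intermediate orientations (quaternions, x-y-z-w order)
    between the object's current pose and its expected pose."""
    key_rots = Rotation.from_quat([current_quat, target_quat])
    slerp = Slerp([0.0, 1.0], key_rots)          # interpolate between the two key rotations
    steps = np.linspace(0.0, 1.0, num_steps)     # evenly spaced interpolation parameters
    return slerp(steps).as_quat()                # shape: (num_steps, 4)

# Example: re-orient by 90 degrees about the z-axis in five steps.
start = Rotation.identity().as_quat()
goal = Rotation.from_euler("z", 90, degrees=True).as_quat()
waypoints = reorientation_waypoints(start, goal, num_steps=5)

Each waypoint orientation would then be combined with an interpolated position and converted into the manipulator's working coordinate system before execution.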

Highlights

  • With the development of intelligent automation and artificial intelligence technologies such as deep learning, the application and development of intelligent robots have gradually attracted attention in academia and industry

  • An object re-orientation planning method based on 3D keypoint detection is proposed for the robot manipulator so that it can re-orient an object from an arbitrary pose to a specified position and pose

  • There are three main contributions of this research: (i) In the object pose estimation system, a CNN-based object detection algorithm is used to recognize the position of each object by separating it into pack units, and the depth image is used to estimate the object’s pose in the environment (see the sketch below)
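
To make the depth-based pose estimation in contribution (i) more concrete, the sketch below shows one way a pack unit's 3D center and surface normal could be recovered from its segmentation mask and an aligned depth image. The camera intrinsics (fx, fy, cx, cy), the least-squares plane fitting, and all function names are assumptions for illustration rather than the paper's actual code.

import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) into camera coordinates using the depth map (meters)."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def pack_unit_center_and_normal(mask, depth, fx, fy, cx, cy):
    """Return the 3D center of a segmented pack unit and the normal of a plane
    fitted to its back-projected pixels (least squares via SVD)."""
    vs, us = np.nonzero(mask)                     # pixels belonging to this pack unit
    pts = np.stack([backproject(u, v, depth, fx, fy, cx, cy)
                    for u, v in zip(us, vs)])
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    normal = vt[-1]                               # direction of least variance
    if normal[2] > 0:                             # flip so the normal points toward the camera
        normal = -normal
    return center, normal

Connecting the pack-unit centers then yields the object's pose, as described in the abstract.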

Introduction

With the development of intelligent automation and artificial intelligence technologies such as deep learning, the application and development of intelligent robots have gradually attracted attention in academia and industry. Pick-and-place tasks in robotics have been extensively developed and researched in industrial manufacturing and in academia. In recent years, with the rise of deep neural networks, deep learning has been widely used not only for classification and logistic regression [1,2] but also in pick-and-place tasks for robot manipulators. This work can be roughly divided into four topics: (i) object detection; (ii) object pose estimation;
