Abstract

In this paper, a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be effectively applied to robot grasping operations. The viewpoint feature histogram (VFH) is widely used for three-dimensional (3D) object recognition and pose estimation in real scenes obtained by depth sensors because of its recognition performance and computational efficiency. However, when an object has a mirrored structure, it is often difficult to distinguish poses that are mirrored relative to the viewpoint using VFH. To address this difficulty, this study presents an improved feature descriptor named the orthogonal viewpoint feature histogram (OVFH), which contains two components: a surface shape component and an improved viewpoint direction component. The improved viewpoint component is calculated from the orthogonal vector of the viewpoint direction, which is obtained based on a reference frame estimated for the entire point cloud. The evaluation of OVFH on a publicly available data set indicates that it enhances the ability to distinguish between mirrored poses while maintaining object recognition performance. The proposed method uses OVFH to recognize and register objects in the database and obtains precise poses by using the iterative closest point (ICP) algorithm. The experimental results show that the proposed approach can be effectively applied to guide the robot to grasp objects with mirrored poses.
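
The exact construction of the improved viewpoint component is given in the body of the paper; as a rough illustration of the idea only, the following C++/PCL sketch derives a direction orthogonal to the viewpoint ray from a PCA reference frame estimated on the whole cloud and histograms its angles to the point normals. The function name, the 128-bin size, and the PCA-based frame are assumptions for illustration, not the paper's definition of OVFH.

```cpp
// Minimal sketch (assumptions noted above): viewpoint direction from the
// sensor to the cloud centroid, an orthogonal direction derived from a
// PCA reference frame on the whole cloud, and an angle histogram against
// the point normals, analogous to the viewpoint component of VFH.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/centroid.h>
#include <pcl/common/pca.h>
#include <Eigen/Core>
#include <algorithm>
#include <array>
#include <cstddef>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

std::array<float, 128> orthogonalViewpointComponent(
    const CloudT::ConstPtr& cloud,
    const pcl::PointCloud<pcl::Normal>& normals,
    const Eigen::Vector3f& sensor_origin)
{
  // Viewpoint direction: from the sensor toward the cloud centroid.
  Eigen::Vector4f centroid4;
  pcl::compute3DCentroid(*cloud, centroid4);
  const Eigen::Vector3f centroid = centroid4.head<3>();
  const Eigen::Vector3f v = (centroid - sensor_origin).normalized();

  // Reference frame for the whole cloud via PCA; take its first axis.
  // (In practice a different axis would be chosen if this one is nearly
  // parallel to v.)
  pcl::PCA<pcl::PointXYZ> pca;
  pca.setInputCloud(cloud);
  const Eigen::Vector3f axis = pca.getEigenVectors().col(0);

  // Orthogonal viewpoint direction: remove the component along v
  // (Gram-Schmidt), leaving a vector perpendicular to the viewpoint ray.
  Eigen::Vector3f ortho = axis - axis.dot(v) * v;
  ortho.normalize();

  // Histogram of cos(angle) between the orthogonal direction and each
  // normal; 128 bins over [-1, 1] (bin count is an assumption).
  std::array<float, 128> hist{};
  for (std::size_t i = 0; i < normals.size(); ++i) {
    const Eigen::Vector3f n = normals.points[i].getNormalVector3fMap();
    const float c = std::max(-1.f, std::min(1.f, ortho.dot(n)));
    const int bin = std::min(127, static_cast<int>((c + 1.f) * 0.5f * 128.f));
    hist[bin] += 1.f;
  }
  return hist;
}
```

Because the orthogonal direction is anchored to a reference frame of the whole cloud rather than to the viewpoint ray alone, two poses that are mirror images with respect to the viewpoint produce different angle distributions, which is the intuition behind the improved discriminability claimed above.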

Highlights

  • Three-dimensional (3D) machine vision is a key technology in the field of robotics

  • To show that the proposed orthogonal viewpoint feature histogram (OVFH) descriptor improves pose retrieval compared with the viewpoint feature histogram (VFH), a publicly available dataset [21] was used for testing; the data set contains point clouds captured from five experiments

  • In order to correctly distinguish mirrored poses relative to the viewpoint, an effective global feature descriptor, OVFH, is proposed in this paper and successfully applied to object recognition and pose estimation


Summary

Introduction

Three-dimensional (3D) machine vision is a key technology in the field of robotics. Although 3D vision technology [1,2] emerged later than two-dimensional (2D) vision technology [3,4], it offers advantages that 2D vision does not have when performing complex visual tasks in 3D space. Real-time 3D sensors such as the Microsoft Kinect and Asus Xtion have become low-cost consumer devices accessible to ordinary users. These sensors can be used to generate colored 3D point clouds of the surface of a given scene in real time, which promotes research on 3D object recognition and registration. Global feature descriptors describe the geometry, appearance, or both of an object point cloud, which makes them advantageous for object recognition and pose estimation [13]. The viewpoint feature histogram (VFH) [8] is a global feature descriptor that can be used for object recognition and pose estimation in a 6-DOF robot grasping operation. The main contributions of this paper are (1) a novel and efficient global feature descriptor for object recognition and pose estimation; (2) an evaluation of the object recognition rate and the ability to distinguish mirrored poses; (3) a visual guidance method for a robotic grasping system.
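
To make the coarse-to-fine pipeline concrete (global descriptor matching followed by ICP refinement), the sketch below uses PCL's stock VFH as a stand-in for OVFH, a brute-force L2 match over a small in-memory model database, and pcl::IterativeClosestPoint for the fine pose. The database layout, helper names, and parameters are assumptions made for illustration, not the paper's implementation.

```cpp
// Coarse recognition with a global descriptor, then fine pose with ICP.
// VFH is used here as a stand-in for OVFH (OVFH is not part of PCL).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/vfh.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>
#include <Eigen/Core>
#include <limits>
#include <vector>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Compute a single 308-bin VFH descriptor for one segmented object cloud.
pcl::VFHSignature308 computeVFH(const CloudT::ConstPtr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>());
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>());

  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.01);          // 1 cm neighborhood (assumed)
  ne.compute(*normals);

  pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh;
  vfh.setInputCloud(cloud);
  vfh.setInputNormals(normals);
  vfh.setSearchMethod(tree);
  pcl::PointCloud<pcl::VFHSignature308> out;
  vfh.compute(out);
  return out.points[0];              // VFH is a single global descriptor
}

// One database entry: a stored model view and its descriptor.
struct ModelView {
  CloudT::Ptr cloud;
  pcl::VFHSignature308 descriptor;
};

// Brute-force L2 match over the 308 histogram bins; a KD-tree over the
// descriptors would be used for larger databases. Database assumed non-empty.
int bestMatch(const pcl::VFHSignature308& query, const std::vector<ModelView>& db)
{
  int best = -1;
  float bestDist = std::numeric_limits<float>::max();
  for (int i = 0; i < static_cast<int>(db.size()); ++i) {
    float d = 0.f;
    for (int b = 0; b < 308; ++b) {
      const float diff = query.histogram[b] - db[i].descriptor.histogram[b];
      d += diff * diff;
    }
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return best;
}

// Match the observed object against the database, then refine with ICP.
Eigen::Matrix4f estimatePose(const CloudT::ConstPtr& scene_object,
                             const std::vector<ModelView>& db)
{
  const pcl::VFHSignature308 query = computeVFH(scene_object);
  const int idx = bestMatch(query, db);

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(db[idx].cloud);   // stored model view
  icp.setInputTarget(scene_object);    // observed object
  CloudT aligned;
  icp.align(aligned);
  return icp.getFinalTransformation(); // model-to-scene transform (coarse pose refined)
}
```

The returned transformation maps the stored model view into the scene, which is the pose a grasping system would convert into the robot's base frame via the hand-eye calibration.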

Improved Global Feature Descriptor
Global Feature Descriptor VFH
Improved Global Feature Descriptor OVFH
Visual Guidance Algorithm for the Robotic Grasping System
Creation of the Database
Object
Experimental Results on the Data Set
Robotic Grasping Experiment
Conclusions and Future Work
