Abstract

Object recognition and pose estimation are important tasks in computer vision. This paper proposes a pose estimation algorithm that uses only depth information. Foreground and background points are distinguished based on their positions relative to boundaries. Model templates are selected using synthetic scenes to compensate for the limitations of the point pair feature algorithm. An accurate and fast pose verification method is introduced to select the final poses from thousands of candidates. Our algorithm is evaluated on a large number of scenes and shown to be more accurate than algorithms that use both color and depth information.
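The abstract does not detail how the boundaries themselves are obtained. As an illustration only, the sketch below shows one common way to extract boundaries from a depth image, by thresholding depth discontinuities between neighbouring pixels; the function name and the threshold value are assumptions, not the authors' implementation. Points could then be classified by their position relative to these boundary pixels, as the abstract describes.

```python
import numpy as np

def depth_boundaries(depth, jump_thresh=0.02):
    """Mark pixels where the depth jumps abruptly between neighbours.
    `depth` is an H x W array (e.g. in meters); `jump_thresh` is an assumed
    discontinuity threshold. Both the function and the threshold are
    illustrative, not taken from the paper."""
    dz_rows = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    dz_cols = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return (dz_rows > jump_thresh) | (dz_cols > jump_thresh)
```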

Highlights

  • Vision-based object recognition and pose estimation have been widely researched because of their importance in robotics applications

  • In order to ensure that objects can be recognized under these circumstances, we focus on developing an algorithm based on the point pair feature approach [10]

  • This paper proposes an object recognition and pose estimation algorithm using only depth information


Summary

Introduction

Vision-based object recognition and pose estimation have been widely researched because of their importance in robotics applications. In one line of work, local descriptors are matched online against stored models, using a novel combination of the RANSAC and Mean Shift algorithms to register multiple instances of each object; however, this method can only be used for textured objects in household environments. The Clustered Viewpoint Feature Histogram (CVFH) [8] and the Oriented, Unique and Repeatable Clustered Viewpoint Feature Histogram (OUR-CVFH) [12] were proposed to detect multiple objects using only depth. Based on LINEMOD, Hinterstoisser et al. [21] generated model templates by synthetically rendering the object and performed pose verification with color and depth information. In this paper, a point pair feature based pose estimation algorithm using depth information is proposed. To improve the efficiency of the point pair feature approach, a boundary-based preprocessing method is proposed to remove background points and points belonging to foreground objects that are larger than the target.
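The point pair feature approach [10] referenced above describes each ordered pair of oriented surface points by a four-dimensional feature built from the distance between the points and the angles formed with their normals. The following minimal sketch (NumPy assumed, helper names illustrative) shows this standard descriptor:

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in [0, pi] between two 3D vectors."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Standard 4D point pair feature of the approach in [10]:
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)) with d = p2 - p1.
    p1, p2 are 3D points; n1, n2 are their unit surface normals."""
    d = p2 - p1
    return np.array([
        np.linalg.norm(d),
        angle_between(n1, d),
        angle_between(n2, d),
        angle_between(n1, n2),
    ])
```

In the standard point pair feature pipeline, these features are quantized and stored in a hash table for the model offline, and at run time scene point pairs vote for candidate object poses, which are then verified; the preprocessing and verification steps listed in the outline below refine this basic scheme.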

Method
Scene Preprocessing
Point Pair Feature
Partial Model Point Pair Feature
Pose Verification
Depth Verification
Inverse Verification
Boundary Verification
Visible Points Verification
Select Result Pose
Experiment
ACCV Dataset
Tejani Dataset
Computation Time
Contribution of Each Step
Findings
Conclusions
