Abstract

More and more applications (e.g. robot path planning, collision avoidance) require a 3D description of the surrounding world. This chapter describes a 3D projective reconstruction method and its application in an object recognition algorithm. The described system uses 2D (color or grayscale) images of the scene taken by uncalibrated cameras, tries to localize known object(s), and determines the relative position and orientation between them. The scene reconstruction algorithm uses simple 2D geometric entities (points, lines) produced by a low-level feature detector as the images of the 3D vertices and edges of the objects. The features are matched across views (Tel & Toth, 2000). During the projective reconstruction the 3D description is recovered. Since the system uses uncalibrated cameras, only the projective 3D structure can be recovered, defined up to a collineation (a worked equation illustrating this ambiguity is given below). Using the Euclidean information about a known set of predefined objects stored in a database, together with the results of the recognition algorithm, this description can be upgraded to a metric one.

Projective reconstruction methods

There are many known solutions to the projective reconstruction problem. Most of the developed methods use point features (e.g. vertices), but there are extensions to higher order features, such as lines and curves (Kaminski & Shashua, 2004). The existing methods can be separated into three main groups.

View tensor methods describe the algebraic relationships among the coordinates of features in multiple images that must be satisfied for them to represent the same spatial feature in the 3D scene (Faugeras & Mourrain, 1995); they estimate the fundamental matrix from two views (Armangue et al., 2001) or the trifocal tensor from three views (Torr & Zisserman, 1997). Factorization based methods use the fact that when the weighted homogeneous (point) projection vectors are collected into a large matrix (the measurement matrix), its rank must be four, because it is the product of two rank-four matrices; an iterative solution to this problem can be found in (Han & Kanade, 2000). Bundle adjustment methods minimize the reprojection errors between the original image feature locations and the estimated projections of the spatial feature locations; the problem can be solved by applying a nonlinear least squares algorithm (e.g. Levenberg-Marquardt). Short code sketches illustrating these three approaches are given below.

Object recognition methods

The aim of object recognition methods is to recognize objects in the scene from a known set of objects, hence some a priori information is required about the objects. These types of
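To make the projective ambiguity mentioned above explicit, the following worked equation (standard multiple-view geometry, not specific to this chapter) shows why an uncalibrated reconstruction is only defined up to a 4x4 collineation H:

\[
x_{ij} \simeq P_i X_j = \left(P_i H^{-1}\right)\left(H X_j\right) = P_i' X_j'
\qquad \text{for any non-singular } H \in \mathbb{R}^{4 \times 4},
\]

so the cameras \(P_i' = P_i H^{-1}\) and points \(X_j' = H X_j\) reproduce exactly the same image features \(x_{ij}\). Only additional metric information, such as the stored Euclidean model of a recognized object, fixes H and upgrades the reconstruction to a metric one.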
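As an illustration of the two-view tensor approach, here is a minimal Python sketch (an assumption of this text, not the chapter's implementation) that estimates the fundamental matrix from matched point features with OpenCV's RANSAC-based estimator; the point arrays are hypothetical placeholders for the output of the feature matching step.

import numpy as np
import cv2

# Matched feature coordinates in two views (one row per correspondence).
# Placeholder data: in the described system these would come from the
# low-level feature detector and the cross-view matching step.
pts1 = np.random.rand(30, 2).astype(np.float32) * 640.0
pts2 = np.random.rand(30, 2).astype(np.float32) * 640.0

# Estimate the fundamental matrix F (x2^T F x1 = 0 for correct matches),
# using RANSAC to reject mismatched features.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("Estimated fundamental matrix:")
print(F)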
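The rank-four constraint exploited by factorization methods can be sketched as follows. This is a simplified, noise-free illustration with hypothetical data, assuming the projective depths are already known; the cited iterative method (Han & Kanade, 2000) estimates them as part of the algorithm.

import numpy as np

# For m views and n points, the measurement matrix W (3m x n) stacks the
# depth-weighted homogeneous image points lambda_ij * x_ij. Since W = P X
# with P of size 3m x 4 and X of size 4 x n, its rank is (at most) four.
def rank4_factorize(W):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P_hat = U[:, :4] * s[:4]      # stacked 3x4 camera matrices (3m x 4)
    X_hat = Vt[:4, :]             # homogeneous scene points (4 x n)
    return P_hat, X_hat           # recovered only up to a 4x4 collineation

# Hypothetical noise-free example: m = 3 views, n = 10 points.
m, n = 3, 10
P_true = np.random.rand(3 * m, 4)
X_true = np.random.rand(4, n)
W = P_true @ X_true
P_hat, X_hat = rank4_factorize(W)
print(np.linalg.matrix_rank(W))        # prints 4
print(np.allclose(P_hat @ X_hat, W))   # prints True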
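Finally, the bundle adjustment idea can be sketched with SciPy's Levenberg-Marquardt solver. The parameterization, the project helper, and the toy data below are assumptions made for illustration only, not the method described in the chapter.

import numpy as np
from scipy.optimize import least_squares

def project(params, n_views, n_points):
    # Unpack 3x4 projective cameras and homogeneous 3D points from the
    # parameter vector and project them: x ~ P X, then dehomogenize.
    P = params[:n_views * 12].reshape(n_views, 3, 4)
    X = params[n_views * 12:].reshape(n_points, 4)
    x = np.einsum('vij,pj->vpi', P, X)   # shape (views, points, 3)
    return x[..., :2] / x[..., 2:3]

def reprojection_residuals(params, observed, n_views, n_points):
    # Differences between the reprojected and the measured image points;
    # bundle adjustment minimizes the sum of their squares.
    return (project(params, n_views, n_points) - observed).ravel()

# Hypothetical toy problem: 3 views, 20 points, observations of shape
# (views, points, 2) and a rough random initial parameter guess.
n_views, n_points = 3, 20
rng = np.random.default_rng(0)
observed = rng.normal(size=(n_views, n_points, 2))
params0 = rng.normal(size=n_views * 12 + n_points * 4)

# method='lm' selects the Levenberg-Marquardt algorithm mentioned above.
result = least_squares(reprojection_residuals, params0, method='lm',
                       args=(observed, n_views, n_points))
print("Final reprojection cost:", result.cost)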
