Abstract

The visual appearance of an object in space is an image configuration projected from a subset of the object's connected faces. Face perception and face integration are believed to play a key role in object recognition in human vision. This paper presents a novel approach to calculating viewpoint consistency for three-dimensional (3D) object recognition, which utilizes perceptual models of face grouping and face integration. In the approach, faces are used as perceptual entities in accordance with the visual perception of shape constancy and face-pose consistency. To accommodate perceptual knowledge of the face visibility of objects, a synthetic view space (SVS) is developed. SVS is an abstract perceptual space that partitions and synthesizes the conventional metric view sphere into a synthetic view box, in which only a limited set of synthetic views (s-views) needs to be considered when estimating face-pose consistency. The s-views are structurally organized in a network, the view-connectivity net (VCN), which describes all possible connections and constraints among the s-views in SVS. VCN provides an effective mechanism for pruning the search space of SVS when estimating face-pose consistency. The method has been successfully applied to recognizing a class of industrial parts.
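
The sketch below is only an illustrative reading of the abstract's description, not the paper's actual data structures or algorithm: it assumes an s-view can be summarized by the set of faces visible from it, that VCN edges connect s-views reachable by a small viewpoint change, and that pruning restricts attention to s-views whose visible-face sets can explain the faces detected in an image (plus their VCN neighbours). All names (`SView`, `ViewConnectivityNet`, `F1`..`F6`, etc.) are hypothetical.

```python
# Hypothetical sketch of an SVS/VCN-style pruning step; the paper's actual
# representation and consistency test may differ substantially.
from dataclasses import dataclass, field


@dataclass
class SView:
    """A synthetic view: a distinct combination of simultaneously visible faces."""
    name: str
    visible_faces: frozenset


@dataclass
class ViewConnectivityNet:
    """Network of s-views; edges link s-views connected by a small viewpoint change."""
    s_views: dict = field(default_factory=dict)   # name -> SView
    edges: dict = field(default_factory=dict)     # name -> set of neighbour names

    def add_s_view(self, view: SView):
        self.s_views[view.name] = view
        self.edges.setdefault(view.name, set())

    def connect(self, a: str, b: str):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def prune(self, observed_faces: set):
        """Keep only s-views whose visible-face set can account for the observed faces."""
        return [v for v in self.s_views.values()
                if observed_faces <= v.visible_faces]

    def consistent_candidates(self, observed_faces: set):
        """Candidate s-views plus their VCN neighbours: the reduced search space
        over which face-pose consistency would then be evaluated."""
        seeds = self.prune(observed_faces)
        names = {v.name for v in seeds}
        for v in seeds:
            names |= self.edges[v.name]
        return [self.s_views[n] for n in sorted(names)]


# Toy example: a box-like part with faces F1..F3 visible in various combinations.
vcn = ViewConnectivityNet()
vcn.add_s_view(SView("top-front", frozenset({"F1", "F2"})))
vcn.add_s_view(SView("top-front-right", frozenset({"F1", "F2", "F3"})))
vcn.add_s_view(SView("front-right", frozenset({"F2", "F3"})))
vcn.connect("top-front", "top-front-right")
vcn.connect("top-front-right", "front-right")

candidates = vcn.consistent_candidates({"F2", "F3"})
print([v.name for v in candidates])  # s-views worth testing for face-pose consistency
```

Under these assumptions, the full metric view sphere never has to be searched: only the handful of s-views compatible with the detected faces, and their immediate neighbours in the net, are passed on to the face-pose consistency estimate.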
