Abstract

A new algorithm is described for estimating the change in orientation and position of an object between two sets of images. The images within each set are calibrated, but the exact geometrical relationship between the two sets of views is unknown. Variations in the two-dimensional silhouette of a fixed, rigid three-dimensional object as the viewpoint changes are analysed to estimate the relative position and orientation of the object in the two image sets. The main advantage of this method is that no explicit point or line correspondences need be identified; the only requirement is reliable segmentation of the object from the background. It is shown that an incorrect estimate of the relative object pose gives rise to silhouettes that are inconsistent, in that they violate a geometrical constraint. The extent to which the silhouettes are consistent is quantified by a consistency metric, and standard minimisation techniques are then used to obtain accurate estimates of both the rotational and translational parameters. Results are presented for the registration of synthetic images, with added noise, and for the registration of real image data. For small test objects the relative orientation estimates are consistent to within ±6 degrees and the relative translation estimates to within ±1.8 mm.
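To make the optimisation step concrete, the sketch below shows one way a silhouette-consistency cost over the six relative-pose parameters (a rotation vector and a translation) might be minimised with a standard optimiser. It is a minimal illustration under simplifying assumptions, not the authors' implementation: the object is represented by sampled points from the first set's visual hull, silhouettes are binary masks, and the consistency measure used here (containment of reprojected hull samples inside every silhouette of the second set) is only one possible choice of metric. All helper functions and data structures are hypothetical.

```python
"""Illustrative sketch: estimate the relative pose between two calibrated
image sets by minimising a silhouette-consistency cost."""

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation


def project(K, R, t, points):
    """Project Nx3 world points into pixels for a camera with intrinsics K
    and extrinsics (R, t)."""
    cam = R @ points.T + t[:, None]            # 3 x N camera coordinates
    uv = K @ cam
    return (uv[:2] / uv[2]).T                  # N x 2 pixel coordinates


def inside_silhouette(mask, uv):
    """Boolean array: which projected points land inside the binary mask."""
    h, w = mask.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    inside = np.zeros(len(uv), dtype=bool)
    inside[ok] = mask[v[ok], u[ok]]
    return inside


def consistency_cost(pose6, hull_points, cameras_b, masks_b):
    """Fraction of (assumed) visual-hull samples from image set A that,
    after applying the candidate relative pose, project outside a
    silhouette of image set B.  Zero only for a consistent pose."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    t = pose6[3:]
    moved = (R @ hull_points.T).T + t
    cost = 0.0
    for (K, Rc, tc), mask in zip(cameras_b, masks_b):
        uv = project(K, Rc, tc, moved)
        cost += np.mean(~inside_silhouette(mask, uv))
    return cost / len(cameras_b)


def estimate_relative_pose(hull_points, cameras_b, masks_b, pose0=None):
    """Minimise the consistency cost over the six pose parameters with a
    derivative-free optimiser, since pixel quantisation makes the cost
    non-smooth."""
    if pose0 is None:
        pose0 = np.zeros(6)
    result = minimize(consistency_cost, pose0,
                      args=(hull_points, cameras_b, masks_b),
                      method="Powell")
    return result.x, result.fun
```

In this toy formulation the cost vanishes only when every transformed hull sample reprojects inside all silhouettes of the second image set, mirroring the geometrical constraint described in the abstract; an incorrect pose estimate leaves some samples outside a silhouette and so incurs a positive cost that the optimiser drives down.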
