Abstract
We advance new active computer vision algorithms based on the Feature Space Trajectory (FST) representation of objects and a neural network processor for computation of distances in a global feature space. Our algorithms classify rigid objects and estimate their pose from intensity images. They also indicate how to automatically reposition the sensor if the class or pose of an object is ambiguous from a given viewpoint, and they incorporate data from multiple object views into the final object classification. An FST in a global eigenfeature space is used to represent 3D distorted views of an object. Assuming that an observed feature vector consists of Gaussian noise added to a point on the FST, we derive a probability density function for the observation conditioned on the class and pose of the object. Bayesian estimation and hypothesis testing theory are then used to derive approximations to the maximum a posteriori probability pose estimate and the minimum probability of error classifier. Confidence measures for the class and pose estimates, derived using Bayes theory, determine when additional observations are required, as well as where the sensor should be positioned to provide the most useful information.
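To make the classification step concrete, the following is a minimal sketch (not the authors' implementation) of FST-based MAP classification. It assumes each class is modeled by a piecewise-linear trajectory whose vertices are eigenfeature vectors of training views, that an observation is a trajectory point corrupted by isotropic Gaussian noise with assumed standard deviation `sigma`, and that the pose estimate is taken as the position of the closest point along the trajectory; all function and parameter names are illustrative.

```python
# Hypothetical sketch of FST-based MAP classification under a Gaussian noise model.
import numpy as np

def distance_to_segment(x, a, b):
    """Squared distance from point x to segment [a, b], plus the interpolation
    parameter t in [0, 1] (a proxy for pose along the segment)."""
    d = b - a
    t = np.clip(np.dot(x - a, d) / (np.dot(d, d) + 1e-12), 0.0, 1.0)
    p = a + t * d
    return np.sum((x - p) ** 2), t

def classify_map(x, fsts, sigma=0.1, priors=None):
    """Approximate MAP class and pose: for each class FST (a (V, D) array of
    view feature vectors), find the closest segment; under the Gaussian noise
    assumption the class likelihood is proportional to exp(-d^2 / (2*sigma^2))."""
    n = len(fsts)
    priors = priors if priors is not None else np.full(n, 1.0 / n)
    scores, poses = np.empty(n), []
    for c, vertices in enumerate(fsts):
        best = (np.inf, 0, 0.0)
        for i in range(len(vertices) - 1):
            d2, t = distance_to_segment(x, vertices[i], vertices[i + 1])
            if d2 < best[0]:
                best = (d2, i, t)
        scores[c] = priors[c] * np.exp(-best[0] / (2.0 * sigma ** 2))
        poses.append((best[1], best[2]))  # segment index + fraction along it ~ pose
    scores /= scores.sum() + 1e-300       # normalized scores act as class confidences
    c_hat = int(np.argmax(scores))
    return c_hat, poses[c_hat], scores
```

The normalized scores returned here play the role of the confidence measures described above: a low maximum score would signal that another view (and a sensor repositioning) is needed before committing to a class decision.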