Abstract

Object recognition is imperative in industrial automation because it gives robots the perceptual capability to understand the three-dimensional (3-D) environment through sensory devices. Treating object recognition as a mapping between object models and a partial description of an object, this paper introduces a three-phase filtering method that eliminates candidate models as soon as their differences from the object become apparent. Throughout the process, a view-insensitive modeling method, namely localized surface parameters, is employed. In the first phase, surface matching compares the localized surface descriptions of the models with those of the object; a model remains a candidate only if every object surface matches locally with at least one of the model surfaces. Since the topological relationship between surfaces specifies the global shape of the object and the models, the second phase checks this relationship with local coordinate systems to ensure that a candidate model has the same structure as the object. Because the information about an object visible from a single viewing direction is necessarily incomplete, the first two phases can only determine whether a candidate contains a portion identical to the object; the selected model may still be larger than the object. To avoid this part-to-whole confusion, the third phase performs a back projection from the candidate models to ensure that no unmatched model features become visible when a model is virtually brought to the object's orientation. If multiple models are still selected because of insufficient information, disambiguating features and the directions from which they are visible are derived to verify the expected features. In addition to providing view-independent object recognition even in ambiguous situations, the filtering method has a low computational complexity, upper bounded by O(m^2 n^2) and lower bounded by O(mn), where m and n are the numbers of model and object features, respectively. The three-phase method has been exercised on real and synthesized range images, and experimental results are given in the paper.
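
To make the pipeline concrete, the sketch below outlines the three filtering phases in Python. It is an illustrative reconstruction, not the paper's implementation: the Surface fields (area, curvature, normal, neighbors), the matching tolerance, the greedy one-to-one surface assignment, and the normal-based visibility test are all assumptions made for brevity, standing in for the paper's actual localized surface parameters and matching procedure.

from dataclasses import dataclass, field

import numpy as np


@dataclass
class Surface:
    """One scene or model surface with illustrative localized parameters."""
    area: float                       # view-insensitive parameter (assumed)
    curvature: float                  # view-insensitive parameter (assumed)
    normal: np.ndarray                # outward unit normal in the model frame
    neighbors: set = field(default_factory=set)  # indices of adjacent surfaces


@dataclass
class Model:
    name: str
    surfaces: list                    # list[Surface]


def surfaces_match(s_obj, s_mod, tol=0.05):
    # Phase-1 local test: compare localized surface parameters, which are
    # insensitive to the viewing direction. The tolerance is a placeholder.
    return (abs(s_obj.area - s_mod.area) <= tol * s_mod.area
            and abs(s_obj.curvature - s_mod.curvature) <= tol)


def phase1_surface_matching(obj, model):
    # A model stays a candidate only if every object surface matches at
    # least one model surface. A greedy first-match assignment keeps this
    # sketch short; the paper may carry multiple match hypotheses instead.
    assignment = {}
    for i, s_obj in enumerate(obj.surfaces):
        k = next((k for k, s_mod in enumerate(model.surfaces)
                  if surfaces_match(s_obj, s_mod)), None)
        if k is None:
            return None          # an object surface has no local match
        assignment[i] = k
    return assignment            # object surface index -> model surface index


def phase2_topology(obj, model, assignment):
    # Adjacent object surfaces must map to adjacent model surfaces, so the
    # candidate shares the object's global structure, not just local patches.
    for i, s_obj in enumerate(obj.surfaces):
        for j in s_obj.neighbors:
            if assignment[j] not in model.surfaces[assignment[i]].neighbors:
                return False
    return True


def visible_surfaces(model, view_dir):
    # Crude visibility test for the sketch: a surface is visible when its
    # outward normal faces the sensor; self-occlusion is ignored here.
    return {k for k, s in enumerate(model.surfaces)
            if float(np.dot(s.normal, view_dir)) < 0.0}


def phase3_back_projection(model, assignment, view_dir):
    # Back projection: with the model virtually brought to the object's
    # orientation (view_dir expressed in the model frame), reject it if any
    # unmatched model surface would become visible (part-to-whole check).
    return visible_surfaces(model, view_dir) <= set(assignment.values())


def three_phase_filter(obj, models, view_dir):
    # Keep only the models that survive all three elimination phases.
    survivors = []
    for model in models:
        assignment = phase1_surface_matching(obj, model)
        if (assignment is not None
                and phase2_topology(obj, model, assignment)
                and phase3_back_projection(model, assignment, view_dir)):
            survivors.append(model)
    return survivors

In this sketch, phase1_surface_matching examines up to m model surfaces for each of the n object surfaces, which is consistent with the O(mn) lower bound quoted above; carrying multiple match hypotheses between phases instead of a single greedy assignment is what could drive the cost toward the O(m^2 n^2) upper bound.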
