Abstract

Despite the significant amount of research devoted to model-based vision, such systems have not been widely adopted in industrial environments because of the rapid throughput rates industry typically requires and the high costs of specialized hardware and prototyping. The model-based vision system described here performs model-based reasoning at real-time (or near-real-time) rates with low hardware and prototyping costs. A set of useful features is extracted from observed models using a library of feature-extraction operators. Scale- and orientation-invariant combinations of these features serve as indices into a hardware lookup table, establishing initial correspondences with the similar combinations encountered when examining unknown objects. From this initial recognition of an unknown object, evidence for an object in a particular spatial pose is accumulated, yielding an initial set of hypotheses. The strongest hypotheses are refined by iteratively hypothesizing previously uninstantiated model/object feature matches and computing a confidence measure for the current instantiation set; if confidence increases, the newly hypothesized instantiation is retained, otherwise it is discarded. The system has been implemented and tested on a Sun SPARC 2 workstation hosting a DataCube image-processing system, using a variety of model objects and feature-extraction primitives. Results show that real-time operation and rapid prototyping are achievable, particularly when at least a few unambiguous model-feature combinations exist.
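The pipeline the abstract outlines — invariant feature combinations indexing a lookup table, vote-based evidence accumulation, then greedy refinement that keeps a hypothesized match only if confidence rises — can be sketched in software. This is a minimal illustrative sketch, not the paper's implementation: the invariant (a size ratio and relative angle), the voting scheme, and the toy confidence measure are all assumptions chosen for clarity.

```python
# Illustrative sketch of the recognition pipeline described in the abstract.
# All names and formulas (invariant_index, score, feature dicts) are
# assumptions for exposition, not the paper's actual design.
import itertools
import math

def invariant_index(f1, f2):
    """Map a feature pair to a scale- and orientation-invariant key:
    the quantized ratio of feature sizes and their relative angle
    (both unchanged by global scaling and rotation)."""
    ratio = f1["size"] / f2["size"]
    dangle = (f1["angle"] - f2["angle"]) % (2 * math.pi)
    return (round(ratio, 1), round(dangle, 1))

def build_lookup(model_features):
    """Offline stage: index every model feature pair by its invariant key
    (playing the role of the hardware lookup table)."""
    table = {}
    for a, b in itertools.permutations(model_features, 2):
        table.setdefault(invariant_index(a, b), []).append((a["id"], b["id"]))
    return table

def score(matches, votes):
    """Toy confidence measure: accumulated vote mass, penalized when a
    scene feature is claimed by more than one model feature."""
    used = [s for _, s in matches]
    penalty = len(used) - len(set(used))
    return sum(votes[m] for m in matches) - 10 * penalty

def recognize(table, scene_features):
    """Online stage: accumulate evidence (votes) for model/scene feature
    matches, then iteratively retain extra matches only while the
    confidence measure increases."""
    votes = {}
    for a, b in itertools.permutations(scene_features, 2):
        for m_a, m_b in table.get(invariant_index(a, b), []):
            votes[(m_a, a["id"])] = votes.get((m_a, a["id"]), 0) + 1
            votes[(m_b, b["id"])] = votes.get((m_b, b["id"]), 0) + 1
    # Strongest hypotheses first, then greedy refinement.
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    matched, confidence = [], 0.0
    for (model_id, scene_id), _ in ranked:
        trial = matched + [(model_id, scene_id)]
        trial_conf = score(trial, votes)
        if trial_conf > confidence:  # keep only if confidence rises
            matched, confidence = trial, trial_conf
    return matched, confidence
```

Because the index is invariant, a scene that is a rotated and scaled copy of the model produces the same keys and votes correctly for the true correspondences; ambiguous feature combinations simply spread their votes across several hypotheses, which is why the abstract notes that a few unambiguous combinations help.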
