Abstract

Graphs and graph matching are powerful mechanisms for knowledge representation, pattern recognition, and machine learning. Their applications in computer vision are especially manifold: graphs can characterize relations among image features such as points or regions, but they may also represent symbolic object knowledge. Graph matching can therefore accomplish recognition tasks at different levels of abstraction. In this contribution, we demonstrate that graphs may also bridge the gap between different levels of knowledge representation. We present a system for visual assembly monitoring that integrates bottom-up and top-down strategies for recognition and automatically generates and learns graph models to recognize assembled objects. Data-driven processing is subdivided into three stages: first, elementary objects are recognized from low-level image features; then, clusters of elementary objects are analyzed syntactically, and if an assembly structure is found, it is translated into a graph that uniquely models the assembly; finally, such symbolic models are stored in a database so that individual assemblies can be recognized by means of graph matching. At the same time, these graphs enable top-down knowledge propagation: they are transformed into graphs that represent relations between image features and thus describe the visual appearance of the recently found assembly. Due to this model-driven knowledge propagation, assemblies may subsequently be recognized by graph matching at a lower computational level, making tedious bottom-up processing superfluous.
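The recognition-by-matching idea in the final stage can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; it is a hypothetical, brute-force check of labeled-graph isomorphism between a stored assembly model and a newly observed structure, with invented part labels (bolt, bar, nut) standing in for elementary objects.

```python
from itertools import permutations

def graph_matches(model, observed):
    """Brute-force test whether two small node-labeled, undirected graphs
    are isomorphic (same structure and same part labels).

    Each graph is a dict: {"nodes": {id: label}, "edges": [(id, id), ...]}.
    Only suitable for tiny graphs; real systems use far better algorithms.
    """
    ids1, ids2 = sorted(model["nodes"]), sorted(observed["nodes"])
    if len(ids1) != len(ids2):
        return False
    obs_edges = {frozenset(e) for e in observed["edges"]}
    # Try every bijection between the two node sets.
    for perm in permutations(ids2):
        mapping = dict(zip(ids1, perm))
        # A valid mapping must preserve node labels ...
        if any(model["nodes"][a] != observed["nodes"][mapping[a]] for a in ids1):
            continue
        # ... and carry model edges exactly onto observed edges.
        mapped = {frozenset((mapping[a], mapping[b])) for a, b in model["edges"]}
        if mapped == obs_edges:
            return True
    return False

# Hypothetical assembly model: a bolt-bar-nut chain stored in the database.
stored_model = {
    "nodes": {"b1": "bolt", "r1": "bar", "n1": "nut"},
    "edges": [("b1", "r1"), ("r1", "n1")],
}
# Newly observed assembly with different node identifiers but the same structure.
observed_chain = {
    "nodes": {"x": "bolt", "y": "bar", "z": "nut"},
    "edges": [("x", "y"), ("y", "z")],
}
# Same parts, but connected differently (bolt linked to both bar and nut).
observed_star = {
    "nodes": {"x": "bolt", "y": "bar", "z": "nut"},
    "edges": [("x", "y"), ("x", "z")],
}
```

Here `graph_matches(stored_model, observed_chain)` succeeds while the star-shaped variant fails, mirroring how a unique graph model lets the system tell structurally different assemblies apart even when they contain the same elementary objects.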
