Abstract

A general conceptual method for acquiring, compressing, and storing information from robot vision systems would be highly convenient in manufacturing applications. Such a methodology aims to emulate, in a general conceptual way, the visual behaviour of manufacturing workers as they perform assembly, sorting, or part-moving actions on a production line. These general conceptual actions include primary recognition of objects and of the working area, training and skill mastering of procedures, and tool management. Today, the acquisition of assembly skills by manufacturing robots is greatly supported by the effective use of contact force sensing and object recognition vision systems, but the performance of industrial robots working in unstructured environments can be further improved by a methodology that incorporates visual perception and learning techniques. In this sense, the object recognition technique described in this paper is accomplished using an artificial neural network (ANN) architecture that receives a descriptive vector, called CFD&POSE, as its input. This vector represents an innovative methodology for the classification and identification of pieces in robotic tasks: it compresses 3D object data from assembly parts, is invariant to scale, rotation, and orientation, and supports a wide range of illumination levels. The CFD&POSE approach (Mario, 2006), in combination with the fast learning capability of ART networks, has been tested and implemented as a general conceptual method for artificial vision, which indicates its suitability for industrial robot vision tasks in manufacturing. This methodology, which integrates acquisition, data compression, data storage, and object reconstruction capabilities, has been called the ACSR-Vision Module by the authors. In this paper, such an integrated module is presented and its potential use in manufacturing applications is demonstrated through experimental results.
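As an illustration of the classification stage described above, the following is a minimal sketch in which a descriptor vector (standing in for the CFD&POSE vector, whose exact construction is not reproduced here) is classified by a simplified Fuzzy ART network. The class, parameter names, and example descriptors are illustrative assumptions, not values or code from the paper; they only show how an ART-style network can commit and match categories with fast learning.

```python
import numpy as np


class FuzzyART:
    """Simplified Fuzzy ART classifier for unit-range descriptor vectors."""

    def __init__(self, vigilance=0.75, choice=0.001, learning_rate=1.0):
        self.rho = vigilance          # vigilance: how strict category matching is
        self.alpha = choice           # choice parameter (small positive constant)
        self.beta = learning_rate     # 1.0 corresponds to ART fast-learning mode
        self.weights = []             # one weight vector per committed category

    def _complement_code(self, x):
        # Complement coding doubles the input: [x, 1 - x]
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        i = self._complement_code(x)
        if not self.weights:
            self.weights.append(i.copy())
            return 0
        # Choice function T_j = |i ^ w_j| / (alpha + |w_j|), ^ = elementwise min
        scores = [np.sum(np.minimum(i, w)) / (self.alpha + np.sum(w))
                  for w in self.weights]
        for j in np.argsort(scores)[::-1]:
            w = self.weights[j]
            # Vigilance test: the match ratio must reach rho, else try the next category
            if np.sum(np.minimum(i, w)) / np.sum(i) >= self.rho:
                self.weights[j] = self.beta * np.minimum(i, w) + (1 - self.beta) * w
                return j
        # No existing category resonates: commit a new one
        self.weights.append(i.copy())
        return len(self.weights) - 1

    def classify(self, x):
        i = self._complement_code(x)
        scores = [np.sum(np.minimum(i, w)) / (self.alpha + np.sum(w))
                  for w in self.weights]
        return int(np.argmax(scores)) if scores else -1


# Usage with made-up 8-component descriptors in [0, 1]:
net = FuzzyART()
part_a = np.array([0.1, 0.9, 0.4, 0.2, 0.7, 0.3, 0.5, 0.8])
part_b = np.array([0.8, 0.2, 0.6, 0.9, 0.1, 0.7, 0.4, 0.3])
net.train(part_a)
net.train(part_b)
print(net.classify(part_a + 0.02))  # a slightly perturbed part maps back to part_a's category
```

Because learning happens in a single presentation of each descriptor (fast learning), new parts can be committed as categories online, which is the property the abstract attributes to the ART component of the ACSR-Vision Module.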
