Abstract

Recognition of a tactile image independent of position, size, and orientation has been a goal of much recent research. Many tasks (e.g. parts identification) give rise to situations that demand a more general methodology than the derivation of a single forward measurement, such as computing a part's area and perimeter from its run-length-coding representation. In such situations, an interpretation procedure generally adopts the techniques and methodology of a pattern recognition approach. To achieve maximum utility and flexibility, the methods used should be insensitive to any change in image size, translation, and rotation, and should provide good repeatability. The algorithm used in this article generally meets these conditions. The results show that recognition schemes based on these invariants are position-, size-, and orientation-independent, and are also flexible enough to learn most sets of parts. Assuming that parts can vary only in location, orientation, and size, certain moments are very convenient for normalization. For instance, the first moments of area give the centroid of a part, which is a natural origin of coordinates for translation-invariant measurements. Similarly, the eigenvectors of the matrix of second central moments define the directions of the principal axes, which leads to rotation-invariant moment measurements.
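The normalization described above — centroid from the first moments of area, principal-axis direction from the eigenvectors of the second central moments — can be sketched as follows. This is a minimal illustration, not the paper's own implementation; the function name and the binary-image representation are assumptions.

```python
import numpy as np

def moment_invariant_frame(image):
    """Derive a translation- and rotation-invariant reference frame
    for a binary part image from its low-order area moments.

    Returns (area, centroid, principal-axis angle). Illustrative sketch
    of the normalization described in the abstract.
    """
    ys, xs = np.nonzero(image)            # pixel coordinates of the part
    area = xs.size                        # zeroth moment: part area in pixels
    cx, cy = xs.mean(), ys.mean()         # first moments / area -> centroid
    dx, dy = xs - cx, ys - cy             # coordinates relative to centroid
    # Matrix of second central moments (covariance of the pixel cloud)
    mu = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                   [np.mean(dx * dy), np.mean(dy * dy)]])
    eigvals, eigvecs = np.linalg.eigh(mu)
    # Principal axis: eigenvector of the largest second central moment
    principal = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(principal[1], principal[0])
    return area, (cx, cy), angle
```

Measurements taken relative to the returned centroid and angle are unchanged by translating or rotating the part; dividing lengths by the square root of the area would additionally give size invariance.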
