Abstract

Two models for visual pattern recognition are described: one based on the application of internal compensatory transformations to pattern representations, the other based on encoding patterns in terms of local features and the spatial relations between those features. These transformation and relational-structure models are each endowed with the same experimentally observed invariance properties, which include invariance to pattern translation and pattern jitter and, depending on the particular version of the model, invariance to pattern reflection and inversion (rotation by 180 degrees). Each model is tested by comparing its predicted recognition performance with experimentally determined recognition performance, using as stimuli random-dot patterns rotated in the plane by various angles. The level of visual recognition of such patterns is known to depend strongly on rotation angle. It is shown that the relational-structure model equipped with invariance to pattern inversion gives responses in close agreement with the experimental data over all pattern rotation angles. In contrast, the transformation model equipped with the same invariances agrees poorly with the experimental data. Some implications of these results are considered.
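As a rough illustration of why a relational-structure code can be inherently inversion-invariant (a toy sketch, not the paper's actual encoding), consider describing a random-dot pattern by the multiset of its pairwise inter-dot distances. Inverting the pattern (rotating it 180 degrees) negates every coordinate but leaves all pairwise distances unchanged, so this hypothetical relational code is identical for a pattern and its inversion:

```python
import itertools
import math
import random

def invert(pattern):
    """Invert (rotate 180 degrees) a dot pattern about the origin."""
    return [(-x, -y) for (x, y) in pattern]

def relational_code(pattern):
    """Toy relational-structure code: the sorted multiset of pairwise
    inter-dot distances. A hypothetical stand-in for an encoding of
    local features and their spatial relations."""
    dists = [math.dist(p, q) for p, q in itertools.combinations(pattern, 2)]
    return sorted(dists)

random.seed(0)
dots = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(8)]

# Inversion negates coordinates, so the raw patterns differ ...
assert invert(dots) != dots
# ... but every pairwise distance is preserved, so the relational
# code is invariant to inversion:
assert relational_code(invert(dots)) == relational_code(dots)
```

A transformation-based model, by contrast, would have to recover the inverting transformation explicitly before comparing representations, which is the behavioural distinction the paper's experiments probe.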
