Abstract

Object recognition based solely on spatial characteristics can only hope to provide limited tolerance to variations in appearance. In everyday life, natural objects may alter their appearance quite dramatically as a result of changes in viewpoint, distance, orientation and illumination. To combat this shortcoming, it has been proposed that the visual system may learn to associate disparate views of objects on the basis of their temporal rather than spatial characteristics. The reasoning behind this suggestion is that views which regularly occur in close temporal proximity are likely to be views of a single transforming object. Previous experimental work has confirmed that invariance learning across depth rotations and changes in fixation is affected by the temporal characteristics of stimulus views. In this paper I describe how two further transformation types, fronto-parallel rotation and illumination change, are also affected by temporal association. Observers viewed sequences of faces undergoing rotation in the image plane or a change in illumination generated by moving a light source around the face's vertical mid-line. Unbeknown to the observers, some of the faces changed their identity as the transformation took place. Two experiments were then run to ascertain whether this manipulation had led to the two endpoint views being regarded as valid views of a single face. In the first test, participants were required to discriminate true from mixed-identity transformation sequences. In the second, discrimination performance was measured via a two-view, same-different task. Both experiments revealed compelling evidence for the predicted effect of manipulating the temporal characteristics of the face views. The results establish the temporal association mechanism as a general-purpose heuristic for coping with a diverse range of invariance-learning problems. They also serve to undermine models of human object recognition which propose the existence of any general-purpose view transformation or shape reconstruction system.
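
The temporal association mechanism described above is commonly modelled computationally with a trace-style Hebbian learning rule, in the spirit of Földiák's trace rule. The sketch below is purely illustrative and is not drawn from the paper or its experiments; the dimensions, parameter names (eta, alpha) and the winner-take-all competition are assumptions chosen only to show how views occurring in close temporal succession can come to be bound to a single identity unit.

```python
import numpy as np

# Minimal, hypothetical sketch of a temporal trace rule (not the paper's method).
# An identity unit keeps a decaying "trace" of its recent activity, so views
# presented close together in time strengthen the same unit's weights and
# become associated, even when they differ spatially.

rng = np.random.default_rng(0)

n_inputs, n_units = 64, 8      # toy sizes: 64-dimensional views, 8 identity units
eta = 0.6                      # trace decay: how strongly past activity persists
alpha = 0.05                   # learning rate
W = rng.normal(scale=0.1, size=(n_units, n_inputs))
trace = np.zeros(n_units)

def step(view):
    """Present one view from a temporally ordered transformation sequence."""
    global trace, W
    y = W @ view                               # feedforward activation
    winner = np.argmax(y)                      # simple competition: one unit responds
    y_now = np.zeros(n_units)
    y_now[winner] = 1.0
    trace = eta * trace + (1.0 - eta) * y_now  # decaying memory of recent activity
    # Hebbian update gated by the trace: temporally adjacent views reinforce
    # the same unit's weights rather than recruiting separate units.
    W += alpha * np.outer(trace, view)
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weight vectors bounded
    return winner

# A toy "transformation sequence": one base pattern plus small perturbations,
# presented in close temporal succession so the trace ties them together.
base = rng.normal(size=n_inputs)
for _ in range(5):
    view = base + 0.3 * rng.normal(size=n_inputs)
    step(view / np.linalg.norm(view))
```

In such a model, splicing a different identity into the middle of a sequence (as in the experiments reported here) would cause the trace to bind the two endpoint views to the same unit, which is the computational analogue of the behavioural effect the paper tests for.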
