Abstract

Four experiments were conducted to study the nature of visual translation invariance in humans. In all the experiments, subjects were trained to discriminate between a previously unknown target and two non-target distractors presented at a fixed retinal location to one side of the fixation point. In a subsequent test phase, this performance was compared with performance when the patterns were presented either centrally at the fixation point or at a location on the other side of the fixation point, opposite to the trained location but where acuity was identical to that at the trained location. Two different experimental paradigms were used. One used an eye-movement control device (Experiment 1) to ensure that the eye could not move relative to the patterns to be learned. In the other three experiments, presentation duration of the patterns was restricted to a period short enough to preclude eye movements. During the training period in Experiments 1 and 2, presentation location of the patterns was centered at 2.4 deg in the periphery, whereas in Experiments 3 and 4 presentation eccentricity was reduced to 0.86 and 0.49 deg, respectively. In all four experiments, performance dropped when the pattern had to be recognized at the new test positions. This result suggests that the visual system does not apply a global transposition transformation to the retinal image to compensate for translations. We propose that, instead, it decomposes the image into simple features that are themselves more or less translation invariant. If, in a given task, patterns can be discriminated using these simple features, then translation invariance will occur. If not, then translation invariance will fail or be incomplete.
