Abstract

Current theories of second-order motion perception postulate that such motion is detected by either a high-level mechanism which computes the temporal correspondences between “features” extracted from the image, or low-level motion mechanisms which operate on a nonlinear, neural transformation of the luminance profile of the image. Theories which favour the latter strategy either suggest that first- and second-order motion are detected by a common mechanism or else that distinct mechanisms exist for the two types of motion, both operating on similar principles. The aim of this study was to differentiate between these possibilities. Observers were required to judge the direction of multiframe motion sequences in which the frames alternated between sinusoidal variations in luminance (first order) and similar variations in contrast (second order). On each frame the modulation signal was displaced by some fraction of its spatial period. The motion sequences were designed such that integration of both types of frame (first and second order) would lead to unambiguous motion in a particular direction, whilst separate analysis of first- or second-order frames alone would yield ambiguous motion. The results show clearly that observers were unable to integrate the first- and second-order frames of such motion sequences. However, when observers were presented with motion sequences in which the frames alternated between two different types of second-order image (variations in the contrast or size of the elements constituting a random noise field), perceived direction was always consistent with integration of both image types. This is taken as support for models that suggest that first- and second-order motion are processed by distinct mechanisms in the visual system and that each mechanism is only sensitive to one type of motion. It is suggested that several varieties of second-order motion stimuli may be regarded as equivalent to contrast-modulated images when considered in terms of the effects of local spatiotemporal filtering operations carried out by the human visual system. In this respect, our results are consistent with the “texture grabber” concept of Werkhoven, Sperling and Chubb [(1993) Vision Research, 33, 463–485].
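To make the alternating-frame design concrete, the following is a minimal illustrative sketch (not the authors' actual stimulus code) of how such a one-dimensional sequence could be generated. It assumes a uniform-noise carrier, a 0.5 modulation depth, and a quarter-period displacement of the modulator on each frame; with that step size, frames of the same type are half a period apart (directionally ambiguous on their own), whereas successive frames of different types are a quarter period apart (unambiguous if the two types are integrated). All parameter names and values here are illustrative assumptions, as the abstract specifies only "some fraction" of the spatial period.

```python
import numpy as np

def make_frames(n_frames=8, width=256, period=64, step_frac=0.25,
                mean_lum=0.5, mod_depth=0.5, rng=None):
    """Build a 1-D motion sequence whose frames alternate between a
    luminance-modulated (first-order) and a contrast-modulated
    (second-order) profile. The modulating sinusoid is displaced by
    step_frac of its spatial period on every successive frame."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.arange(width)
    # Static zero-mean noise carrier for the second-order frames.
    carrier = rng.uniform(-1.0, 1.0, size=width)
    frames = []
    for i in range(n_frames):
        phase = 2 * np.pi * i * step_frac
        modulator = np.sin(2 * np.pi * x / period - phase)
        if i % 2 == 0:
            # First-order frame: sinusoidal variation in luminance.
            frame = mean_lum * (1 + mod_depth * modulator)
        else:
            # Second-order frame: sinusoidal variation in the contrast
            # of the noise carrier; mean luminance stays constant.
            frame = mean_lum * (1 + mod_depth * modulator * carrier)
        frames.append(frame)
    return np.asarray(frames)
```

A mechanism that registered both frame types would see a consistent quarter-period shift per frame in one direction, whereas a mechanism sensitive only to the first-order (or only the second-order) frames would see half-period jumps and could not recover a unique direction.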
