Abstract

Over the past 40 years, psychophysical and physiological research on motion perception has spawned a large number of computational models. Standard ‘low-level’ models extract motion directly from the time-varying luminance profile of the image, but are widely believed to be ‘blind’ to motion in ‘second-order’ stimuli (those in which motion is carried by cues such as texture variation). Of particular interest is a class of ‘microbalanced’ second-order stimuli in which the expected distribution of the power spectrum contains no directional information. Humans are able to see motion in microbalanced stimuli, leading to the proposal that we possess specialized mechanisms for detecting second-order motion. A recent paper by Benton and Johnston [1. Benton, C.P., and Johnston, A. (2001). A new approach to analysing texture-defined motion. Proc. R. Soc. Lond. B Biol. Sci. 268, 2435–2443.] suggests that a re-evaluation of this proposal could be warranted. By analysing sequences of moving images in terms of local spatial and temporal gradients, they show that the motion of luminance-based and microbalanced stimuli can in principle be detected by a single mechanism.
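To make the gradient-based idea concrete, the sketch below illustrates the generic motion gradient constraint, I_x·v + I_t = 0, in which local velocity is recovered from the ratio of temporal to spatial luminance derivatives. This is a minimal NumPy illustration of the general principle only, not Benton and Johnston's specific multi-channel model; the function name `gradient_speed`, the 1-D stimulus, and all parameter choices are assumptions introduced here for illustration.

```python
import numpy as np

def gradient_speed(frames, dt=1.0, dx=1.0, eps=1e-6):
    """Estimate 1-D local velocity from spatial and temporal luminance gradients.

    frames: array of shape (T, X) holding a 1-D image sequence.
    Returns one velocity estimate per spatial position, pooled over time,
    using the least-squares solution of the gradient constraint I_x * v + I_t = 0.
    """
    frames = np.asarray(frames, dtype=float)
    # Spatial derivative dI/dx and temporal derivative dI/dt (central differences).
    I_x = np.gradient(frames, dx, axis=1)
    I_t = np.gradient(frames, dt, axis=0)
    # Least-squares pooling over time: v = -sum(I_x * I_t) / sum(I_x^2).
    num = -(I_x * I_t).sum(axis=0)
    den = (I_x ** 2).sum(axis=0) + eps
    return num / den

# Example: a luminance grating drifting at +0.5 pixels per frame.
x = np.arange(64)
t = np.arange(32)[:, None]
speed = 0.5
grating = np.sin(2 * np.pi * (x - speed * t) / 16.0)
print(gradient_speed(grating).mean())  # ~0.5
```

For a pure luminance grating the constraint is satisfied almost exactly, so the recovered speed matches the drift rate up to discretization error; the point of Benton and Johnston's analysis is that suitably combined gradient measurements can also signal direction for microbalanced, texture-defined stimuli.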
