Abstract

Little is known about the way in which the outputs of early orientation-selective neurons are combined. One particular problem is that the number of possible combinations of these outputs greatly outweighs the number of processing units available to represent them. Here we consider two of the possible ways in which the visual system might reduce the impact of this problem. First, the visual system might ameliorate the problem by collapsing across some low-level feature coded by previous processing stages, such as spatial frequency. Second, the visual system may combine only a subset of available outputs, such as those with similar receptive field characteristics. Using plaid-selective contrast adaptation and the curvature aftereffect, we found no evidence for the former solution; both aftereffects were clearly tuned to the spatial frequency of the adaptor relative to the test probe. We did, however, find evidence for the latter with both aftereffects; when the components forming our compound stimuli were dissimilar in spatial frequency, the effects of adapting to them were substantially reduced. This has important implications for mid-level visual processing, both for the combinatorial explosion and for the selective "binding" of common features that are perceived as coming from a single visual object.
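As an illustrative back-of-the-envelope sketch of the combinatorial problem (the figures below are assumptions for illustration, not values reported in the paper): if early vision supplies on the order of $N$ distinguishable orientation-by-spatial-frequency channels, the number of unordered pairwise combinations of their outputs grows as

$$\binom{N}{2} = \frac{N(N-1)}{2},$$

so even a modest $N = 100$ yields $4{,}950$ possible pairings, while restricting combination to components with similar receptive field characteristics (for example, spatial frequencies within roughly an octave of one another) prunes this set substantially.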
