Abstract

It seems clear that our progress in understanding the neural subsystems underlying form and pattern vision will depend to a large extent on precise formulations of the manner in which forms and patterns are perceived in the first place. Over the last 20 years, two different modes of image processing have received the most attention. The first approach is known as hierarchical feature processing. According to this view, the visual system analyzes images by detecting the presence of local features (e.g., edges, line segments, and corners). Recognition then depends on the assimilation of these features into higher-order groups that represent the boundaries of objects present in the scene. Interpretations of stimulus-induced activity of cortical cells in terms of feature detectors are consistent with this view, although it is generally recognized that individual cortical cells do not behave as true feature detectors (since, for example, there is a trade-off between orientation and contrast such that an optimally oriented bar of low contrast can produce the same response as a higher-contrast bar at a different orientation). In any case, the notion of hierarchical feature detection logically leads to the idea that complicated percepts are encoded by individual neurons located at a "high level" in the visual system. Despite some evidence for such neurons in the inferotemporal cortex of monkeys, this approach is generally regarded as too cumbersome to provide the neural substrate of pattern and form vision in general, although it might be the way in which images of particular ecological relevance to an animal are encoded.
