Abstract

To maintain color constancy, the human visual system must distinguish variations in wavelength and luminance caused by surface reflectance from variations due to illumination. Edge integration theory proposes that this is accomplished by spatially integrating the steps in luminance and color contrast that likely result from reflectance changes, thereby constructing a neural representation of relative reflectance within the visual scene. An anchoring rule, according to which the largest reflectance in the neural representation appears white, is then applied to map relative lightness onto an absolute lightness scale. A large body of data on human lightness judgments is here shown to be consistent with an edge integration model in which the visual system performs a weighted sum of steps in log luminance across space. Three hypotheses are proposed regarding how weights are applied to edges. First, weights decline with distance from the target surface whose lightness is being computed. Second, larger weights are given to edges whose dark sides point towards the target. Third, edge integration is carried out along a path leading from a common background field, or surround, to the target location. The theory accounts for simultaneous contrast; for quantitative lightness judgments made with classical disk-annulus, Gilchrist dome, and Gelb displays; and for perceptual filling-in of lightness. A cortical theory of lightness in the ventral stream of visual cortex (areas V1 → V4) is proposed to instantiate the edge integration algorithm. The neural model is shown to unify the quantitative laws of edge integration in lightness perception with the laws governing brightness, including Stevens' power law, and it makes novel predictions about the quantitative laws governing induced darkness.
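
To make the computation described above concrete, the following is a minimal sketch, in Python, of a weighted edge-integration-plus-anchoring scheme along a one-dimensional path of regions. The function names, the log base, and the parameter values (decay, polarity_bias, the white reflectance of 0.9) are illustrative assumptions for this sketch, not quantities taken from the model's fitted parameters.

```python
import math

def edge_integration_lightness(luminances, decay=0.5, polarity_bias=1.5):
    """Relative lightness of the last region on a path running from the
    common background (index 0) to the target (last index), computed as a
    weighted sum of steps in log luminance across the path's edges.
    decay and polarity_bias are illustrative placeholders, not fitted values."""
    target = len(luminances) - 1
    log_lum = [math.log10(l) for l in luminances]
    relative = 0.0
    for i in range(target):
        step = log_lum[i + 1] - log_lum[i]   # edge step in log luminance
        distance = target - (i + 1)          # edges between this edge and the target
        weight = decay ** distance           # hypothesis 1: weights decline with distance
        if step < 0:                         # luminance drops toward the target, so the
            weight *= polarity_bias          # edge's dark side points at it (hypothesis 2)
        relative += weight * step
    return relative

def anchor_to_white(relative_values, white_log_reflectance=math.log10(0.9)):
    """Anchoring rule: the largest relative value is mapped to white
    (reflectance 0.9 here, an illustrative choice); the rest shift with it."""
    shift = white_log_reflectance - max(relative_values)
    return [v + shift for v in relative_values]

# Example: a disk-annulus display, integrating along the path
# surround -> annulus -> disk (luminances in arbitrary units,
# made up for illustration).
disk_lightness = edge_integration_lightness([100.0, 40.0, 20.0])
```

Integrating along the path from the surround to the target (hypothesis 3) is what makes remote edges, attenuated by distance and edge polarity, contribute to the target's computed lightness rather than the nearest edge alone.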
