Abstract
Land and McCann’s original Retinex theory [1] described how the brain might achieve color constancy by spatially integrating the outputs of edge detector neurons in visual cortex (i.e., Hubel and Wiesel cells). Given a collection of reflective surfaces, separated by hard edges (a Mondrian stimulus) and viewed under uniform illumination, Retinex first computes luminance ratios at the borders between surfaces, then multiplies these ratios along image paths to compute the relative ratios of noncontiguous surfaces. This multiplication is equivalent to summing steps in log luminance. Here I review results from the human lightness literature supporting the key Retinex assumption that biological lightness computations involve a spatial integration of steps in log luminance. However, to explain perceptual data, the original Retinex algorithm must be supplemented with additional perceptual principles that together determine the weights given to particular image edges. These include: distance-dependent edge weighting, different weights for incremental and decremental luminance steps, contrast gain acting between edges, top-down control of edge weights, and computations in object-centered coordinates. I outline a theory, informed by recent findings from neurophysiology, of how these computations might be carried out by neural circuits in the ventral stream of visual cortex.
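The following is a minimal illustrative sketch, not the paper's or Land and McCann's implementation, of the equivalence stated in the abstract: multiplying luminance ratios at the edges crossed along an image path gives the relative ratio of two non-contiguous surfaces, and this is the same as summing steps in log luminance. The one-dimensional "Mondrian" luminance values below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical luminances of surfaces encountered along one image path
# through a Mondrian-like stimulus (arbitrary units).
luminances = np.array([10.0, 40.0, 20.0, 80.0])

# Luminance ratios at the borders between adjacent surfaces.
edge_ratios = luminances[1:] / luminances[:-1]

# Multiplying these ratios along the path yields the relative ratio of the
# first and last (non-contiguous) surfaces ...
ratio_product = np.prod(edge_ratios)

# ... which is equivalent to summing the steps in log luminance.
log_steps = np.diff(np.log(luminances))
assert np.isclose(ratio_product, np.exp(np.sum(log_steps)))

print(ratio_product)      # 8.0, i.e. 80 / 10
print(np.sum(log_steps))  # log(8.0)
```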