Abstract

The brain analyses the visual world through the luminance patterns that reach the retina. Formally, luminance (as measured by the retina) is the product of illumination and reflectance. Whereas illumination is highly variable, reflectance is a physical property that characterizes each object surface. Due to memory constraints, it seems plausible that the visual system suppresses illumination patterns before object recognition takes place. Since many combinations of reflectance and illumination can give rise to identical luminance values, finding the correct reflectance value of a surface is an ill-posed problem, and it is still an open question how it is solved by the brain. Here we propose a computational approach that first learns filter kernels (“receptive fields”) for slow and fast variations in luminance, respectively, from achromatic real-world images. Distinguishing between luminance gradients (slow variations) and non-gradients (fast variations) could serve to constrain the mentioned ill-posed problem. The second stage of our approach successfully segregates luminance gradients and non-gradients from real-world images. Our approach furthermore predicts that visual illusions that contain luminance gradients (such as Adelson’s checker-shadow display or grating induction) may occur as a consequence of this segregation process.
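The idea of segregating slow luminance variations (gradients, typically caused by illumination) from fast ones (non-gradients, typically caused by reflectance edges) can be illustrated with a minimal sketch. The snippet below is not the paper's learned-kernel method; it uses a fixed Gaussian low-pass filter as a stand-in, and all names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic 1-D "image": luminance is the product of a smooth
# illumination gradient (slow variation) and a sharp-edged
# reflectance pattern (fast variation).
x = np.linspace(0.0, 1.0, 256)
illumination = 0.5 + 0.5 * x                              # slow gradient
reflectance = np.where((x > 0.3) & (x < 0.7), 0.8, 0.2)   # sharp edges
luminance = reflectance * illumination

# In log space the product becomes a sum, so linear filtering can
# approximately segregate the two components (sigma is an assumption,
# standing in for the learned "slow" receptive fields).
log_lum = np.log(luminance)
slow = gaussian_filter(log_lum, sigma=20)   # gradient estimate
fast = log_lum - slow                       # non-gradient residual

# The two estimated components multiply back to the input luminance.
recon = np.exp(slow) * np.exp(fast)
print(np.allclose(recon, luminance))
```

By construction the residual step makes the decomposition exact, which mirrors why the problem is ill-posed: any split into a "slow" and a "fast" component reproduces the luminance, so additional constraints (here, the filter's scale) are needed to pick out the physically meaningful one.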
