Abstract

A neural model is proposed of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models clarify how the brain can compute relative contrast. The anchored Filling-In Lightness Model (aFILM) clarifies how the brain 'anchors' lightness percepts to determine an absolute lightness scale that uses the full dynamic range of neurons. The model quantitatively simulates lightness anchoring properties (Articulation, Insulation, Configuration, Area Effect) and other lightness data (discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, the Craik-O'Brien-Cornsweet illusion). The model clarifies how retinal processing stages achieve light adaptation and spatial contrast adaptation, and how cortical processing stages fill in surface lightness using long-range horizontal connections that are gated by boundary signals. The new filling-in mechanism runs 1000 times faster than the diffusion mechanisms of previous filling-in models.
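
To make the two mechanisms named above concrete, here is a minimal 1-D NumPy sketch of boundary-gated diffusive filling-in (the slower diffusion mechanism of earlier filling-in models that aFILM improves on) followed by a highest-value-as-white anchoring step in the spirit of Gilchrist's anchoring rule. The function names fill_in_diffusion and anchor_to_white and all parameter values are illustrative assumptions, not the paper's aFILM equations.

```python
import numpy as np

def fill_in_diffusion(contrast, boundary, n_iter=5000, rate=0.2):
    """Illustrative boundary-gated diffusion filling-in (1-D sketch).

    contrast : feature signals to be filled in
    boundary : boundary strength in [0, 1]; strong values block diffusion
    """
    v = contrast.astype(float).copy()
    # Permeability between neighboring cells drops wherever either
    # neighbor carries a strong boundary signal.
    perm = 1.0 - np.maximum(boundary[:-1], boundary[1:])
    for _ in range(n_iter):
        # Gated nearest-neighbor exchange: activity flows from higher
        # to lower cells within a region, but not across a boundary.
        flow = perm * (v[1:] - v[:-1])
        v[:-1] += rate * flow
        v[1:] -= rate * flow
    return v

def anchor_to_white(v, white=1.0):
    """Hypothetical highest-value-as-white anchoring: rescale the
    filled-in surface so its maximum maps to the 'white' level."""
    return v * (white / v.max())

# Usage: two noisy plateaus separated by a boundary. Diffusion smooths
# the noise within each region, the boundary preserves the step between
# them, and anchoring rescales the brighter region toward 'white'.
rng = np.random.default_rng(0)
image = np.r_[np.full(50, 0.3), np.full(50, 0.7)]
image += 0.05 * rng.standard_normal(100)
edges = np.zeros(100)
edges[49:51] = 1.0
percept = anchor_to_white(fill_in_diffusion(image, edges))
```

The iterative exchange above is exactly why diffusive filling-in is slow: activity must hop cell by cell across a region, so convergence time grows with region size, which is the cost the abstract's long-range horizontal connections are said to avoid.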
