Abstract

Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from spatial variation in either surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4.
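
The edge-integration computation itself can be made concrete with a small numerical sketch. The Python below is illustrative only, not the fitted model of Rudd and Zemach: it treats each border along a path from the background to a target as a directed step in log luminance and sums those steps with adjustable perceptual weights. The function name, the example luminances, and the particular weight values are assumptions introduced here for illustration.

    import math

    def edge_integrated_lightness(luminances, weights=None):
        """Sum weighted log-luminance steps along a path (illustrative sketch).

        `luminances` lists the luminance of each successive region along a
        path from the background to the target; each directed step in log
        luminance (an "edge") is multiplied by a perceptual weight and the
        weighted steps are summed to give a relative lightness signal.
        """
        steps = [math.log(b / a) for a, b in zip(luminances, luminances[1:])]
        if weights is None:
            weights = [1.0] * len(steps)  # equal edge weighting by default
        return sum(w * s for w, s in zip(weights, steps))

    # Hypothetical path: background (100 cd/m^2) -> surround ring (40) -> target (60).
    # Down-weighting the remote background/surround edge (w = 0.5) relative to the
    # target's own border (w = 1.0) changes the integrated lightness signal.
    print(edge_integrated_lightness([100.0, 40.0, 60.0], weights=[0.5, 1.0]))

With equal weights the sum telescopes to the log luminance ratio between target and background; allowing the weights to differ across edges is what lets grouping and the observer's top-down interpretation, as described above, alter the computed lightness.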

Highlights

  • Rudd (2010) showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from spatial variation in either surface reflectance or illumination

  • In this paper, I outline a theory of object-based neural lightness computation occurring within the ventral stream of visual cortex (Figure 1) and apply this theory to problems of gestalt grouping and individual differences in lightness perception

  • The theory includes bottom-up, top-down, and mid-level computations that are identified respectively with: (1) early sensory encoding of local oriented contrast occurring along the pathway from retina to V1; (2) task-specific top-down cortical feedback modulation of the early neural contrast code in V1; and (3) neural circuit computations in V2 that perform functions related to image segmentation (a schematic sketch of this staged computation follows the list)

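As a loose illustration of the staged computation in the last highlight (and of the two-stage selection described in the abstract), the sketch below gates a set of local edge signals first by a top-down spatial-selection mask and then by border-ownership gains before summing them into a single lightness signal; the array names and all numerical values are placeholders, not parameters taken from the model.

    import numpy as np

    # Stage 1 (stand-in for V1): local directed steps in log luminance along a path.
    edge_signals = np.array([-0.92, 0.41, 0.18])

    # Stage 2 (top-down spatial selection): edges outside the region of
    # interest are zeroed out and so cannot contribute to lightness.
    spatial_select = np.array([1.0, 1.0, 0.0])

    # Stage 3 (stand-in for V2 border ownership): gains that weight edges
    # according to which surface owns the border.
    ownership_gain = np.array([0.5, 1.0, 1.0])

    # Only edges surviving both stages are integrated (stand-in for V4).
    lightness_signal = np.sum(edge_signals * spatial_select * ownership_gain)
    print(lightness_signal)

In the model described in the abstract, the ownership gains are set automatically by an object-based network computation rather than fixed by hand, and only the borders that survive both the spatial-selection and border-ownership stages are integrated.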