Abstract

We simultaneously perceive at each visible surface point (at least) a surface color (with several dimensions), an illuminant color, surface orientation, and surface specularity. Perceived surface and lighting variables are a multidimensional function of the past several minutes of chromatic exposure and the current retinal images. Models with outputs that are single-valued functions of image spatial and temporal derivatives (e.g., edge ratio models of Wallach, Land, Horn, Arend) describe sensory processes providing important relational information. However, they are easily shown to be incomplete as models of surface color constancy. Traditional color appearance models are similarly incomplete. This multidimensionality of surface perception was known to early theorists and has been further elaborated by recent human perception and machine vision theorists. The human vision data needed for refinement of computational color constancy models are currently in short supply for both theoretical and methodological reasons. We have recently measured human color constancy using both spatially simple and complex patterns. Our procedure allowed separate measurement of local sensory color (hue, saturation, and brightness) and apparent surface colors (surface chromatic color and lightness). Even in the simplified scenes of our experiments there is a complicated relationship between the sensory color at an image point and the apparent color of the surface perceived on the corresponding sight line.
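The edge-ratio models mentioned above (Wallach, Land, Horn, Arend) recover relative surface lightness from luminance ratios at edges while discarding shallow gradients attributed to illumination. As an illustration only, here is a minimal 1-D sketch of that idea; the function name, threshold value, and test pattern are assumptions for this example, not taken from the paper.

```python
import math

def edge_ratio_lightness(luminance, threshold=0.1):
    """Return relative log-lightness along a 1-D path of luminances.

    Small log-luminance steps between neighbors are treated as a smooth
    illumination gradient and discarded; large steps (reflectance edges)
    are kept and re-integrated, Retinex-style.
    """
    logs = [math.log(v) for v in luminance]
    # Log ratios between adjacent samples.
    steps = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]
    # Keep only ratios large enough to count as reflectance edges.
    kept = [s if abs(s) > threshold else 0.0 for s in steps]
    # Re-integrate the kept edges; anchor the first sample at 0.
    out = [0.0]
    for s in kept:
        out.append(out[-1] + s)
    return out

# Hypothetical test pattern: a surface with two reflectances (0.2, 0.8)
# under a gentle illumination ramp. The ramp's small steps are discarded;
# the reflectance edge survives re-integration.
illum = [1.0, 1.02, 1.04, 1.06, 1.08]
refl = [0.2, 0.2, 0.8, 0.8, 0.8]
lum = [i * r for i, r in zip(illum, refl)]
rel = edge_ratio_lightness(lum)
```

Because the output is a single-valued function of the image's spatial derivatives, a sketch like this cannot by itself represent the multiple simultaneous percepts (surface color, illuminant color, specularity) that the abstract argues a complete constancy model must deliver.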
