Abstract
When we look at the world—or a graphical depiction of the world—we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance—based on a broader theoretical framework called gamut relativity—that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.
Highlights
The human visual system manifests the remarkable capacity to identify surface materials from the complex patterns of light reaching the eye [1,2].
We present a model that quantitatively accounts for perceptual data relating to some of the most striking and theoretically important effects of layered perceptual representation and surface appearance reported in the literature.
The model: (1) provides the first unified analysis of how the visual system represents surfaces independently of shadows and atmospheric media, as exemplified in the Adelson checkerboard and Anderson-Winawer effects; (2) reconciles and unifies two prominent theories of surface lightness; (3) quantitatively predicts how stimulus- and task-driven factors combine to control brightness/lightness matching behaviours reported in published perceptual experiments; (4) unifies two previously published gamut relativity models, aimed at explaining properties of brightness/lightness perception [52,54], lightness/transparency perception [55] and lightness/gloss perception [56], respectively.
Summary
The human visual system manifests the remarkable capacity to identify surface materials from the complex patterns of light reaching the eye [1,2]. This capacity is exploited in the computer graphics industry to create convincing renderings of surface materials based on physical models of ‘light transport’ [3,4,5]. The net result is that the light patterns reaching the eye from a rendered image consist of a mixture of physically modelled causes. How the human visual system parses such images into separate material, illumination and atmospheric layers remains a challenging problem in both human vision science and computer vision science.
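To make the “mixture of physically modelled causes” concrete, the toy sketch below (our illustration, not the model described in the paper) composites a checkerboard reflectance layer, a smooth illumination gradient and a semi-transparent haze into a single luminance image using a standard alpha-compositing rule; all variable names, parameter values and the compositing equation are assumptions chosen for illustration. The final assertion highlights why parsing the image back into layers is hard: very different reflectance/illumination pairs reproduce exactly the same luminance pattern.

```python
# Minimal illustrative sketch (assumed toy image-formation model, not the
# paper's model): the light reaching the eye mixes a reflectance layer,
# an illumination layer and a semi-transparent atmospheric layer.
import numpy as np

# Surface reflectance: a black-and-white checkerboard (values in [0, 1]).
checks = np.indices((8, 8)).sum(axis=0) % 2
reflectance = np.kron(checks, np.ones((16, 16))).astype(float)

# Illumination: a smooth gradient standing in for shading or a soft shadow.
x = np.linspace(0.3, 1.0, reflectance.shape[1])
illumination = np.tile(x, (reflectance.shape[0], 1))

# Atmospheric layer: a uniform haze with assumed transmittance and luminance.
alpha = 0.6           # fraction of surface light transmitted through the haze
haze_luminance = 0.5  # light contributed by the haze itself

# Composited luminance image: the "mixture of physically modelled causes".
luminance = alpha * reflectance * illumination + (1.0 - alpha) * haze_luminance

# The inverse problem faced by the visual system: recover reflectance,
# illumination and haze from `luminance` alone. It is ill-posed -- a darker
# surface under brighter light yields exactly the same luminance image.
alt_reflectance = reflectance * 0.5
alt_illumination = illumination * 2.0
alt_luminance = (alpha * alt_reflectance * alt_illumination
                 + (1.0 - alpha) * haze_luminance)
assert np.allclose(luminance, alt_luminance)
```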