Abstract

The color we perceive at each point in an image depends on information spread across the three spatial arrays of cone photoreceptors. I describe experiments aimed at clarifying how information is integrated across these arrays to yield a color experience. We have found that changes of color appearance due to changes in the ambient illumination and in the pattern's spatial frequency can be described using a simple set of optical and neural transformations. Each transformation can be thought of as having two parts. First, the transformation converts the color representation into a new coordinate frame that is independent of the image contents. Second, the transformation scales the neural responses in the new coordinate frame by a gain factor that depends on the image contents.
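The two-part scheme sketched above amounts to a fixed linear change of coordinates followed by content-dependent gain control. The following Python sketch illustrates that structure only; the matrix entries and the mean-based gain rule are illustrative assumptions, not the transformations or parameters estimated in the experiments.

```python
import numpy as np

# Illustrative fixed matrix mapping cone (L, M, S) responses into an
# image-independent, opponent-like coordinate frame. These values are
# placeholders, not the coordinate frame identified in the paper.
TO_OPPONENT = np.array([
    [1.0,  1.0,  0.0],   # luminance-like channel (L + M)
    [1.0, -1.0,  0.0],   # red-green opponent channel (L - M)
    [0.5,  0.5, -1.0],   # blue-yellow opponent channel ((L + M)/2 - S)
])

def adapt(image_lms):
    """Two-part transformation described in the abstract (sketch).

    image_lms: array of shape (npixels, 3) holding cone responses.
    Part 1: convert into a coordinate frame that does not depend on
    the image contents (a fixed linear transform).
    Part 2: scale each channel by a gain that does depend on the image
    contents; here a simple von Kries-style normalization by the mean
    channel response, an assumed rule chosen for illustration.
    """
    opponent = image_lms @ TO_OPPONENT.T                    # part 1: fixed frame change
    gains = 1.0 / (np.abs(opponent).mean(axis=0) + 1e-12)   # part 2: content-dependent gains
    return opponent * gains
```

In this sketch, rendering the same scene under two different illuminants changes the mean channel responses and hence the gains, so the content-dependent scaling varies with illumination while the coordinate frame stays fixed, mirroring the division of labor the abstract describes.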
