Abstract

Reproducing the perception of a real-world scene on a display device is a challenging task that requires an understanding of the camera processing pipeline, the display process, and the way the human visual system processes the light it captures. Mathematical models based on psychophysical and physiological laws of color vision, known as Retinex models, provide efficient tools to handle degradations produced during the camera processing pipeline, such as contrast reduction. In particular, Batard and Bertalmío (J Math Imaging Vis 60(6):849–881, 2018) described some psychophysical laws of brightness perception as covariant derivatives, incorporated them into a variational model, and observed that the quality of the color image correction correlates with the accuracy of the vision model it includes. Based on this observation, we postulate that this model can be improved by including more accurate data on vision, with particular attention here to visual neuroscience. Then, inspired by the presence in area V1 of the visual cortex of neurons responding to different visual attributes, such as orientation, color, or motion, to name a few, and of horizontal connections modeling the interactions between those neurons, we construct two variational models to process both local (edges, textures) and global (contrast) features. This is an improvement over the model of Batard and Bertalmío, as the latter cannot process local and global features independently and simultaneously. Finally, we conduct experiments on color images that corroborate the improvement provided by the new models.
