Abstract

This paper describes a computer program for calculating the contrast image on the human retina from an array of scene luminances. We used achromatic transparency targets and measured the targets' luminances with meters. We used the CIE standard Glare Spread Function (GSF) to calculate the array of retinal contrast. This paper describes the CIE standard, the calculation, and the analysis techniques used to compare the calculated retinal image with observer data. The paper also describes in detail the techniques of accurate measurement of HDR scenes, conversion of measurements to input data arrays, calculation of the retinal image (including open-source MATLAB code), pseudocolor visualization of HDR images that exceed the range of standard displays, and comparison of observed sensations with retinal stimuli.
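As a rough illustration of the pseudocolor visualization step, the MATLAB fragment below maps log10 luminance onto a color lookup table so that a range far beyond what a standard display can reproduce remains legible. This is a minimal sketch under stated assumptions, not the paper's published code: the synthetic ramp, the array size, the 6-log-unit range, and the choice of colormap are all illustrative.

    % Minimal sketch (not the paper's published code): pseudocolor display
    % of a luminance array whose dynamic range exceeds a standard display.
    % The synthetic ramp and the 6-log-unit range are illustrative assumptions.
    L = logspace(0, 6, 512);           % luminances spanning 6 log units, cd/m^2
    luminance = repmat(L, 512, 1);     % 512 x 512 horizontal luminance ramp
    logL = log10(luminance);           % compress the range with log10
    figure;
    imagesc(logL, [0 6]);              % assign color indices to log-luminance
    axis image off;
    colormap(jet(256));                % pseudocolor lookup table
    c = colorbar;
    c.Label.String = 'log_{10} luminance (cd/m^2)';

Mapping log luminance, rather than luminance itself, onto the color table is what lets a 6-log-unit range stay distinguishable on an ordinary 8-bit display.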

Highlights

  • Psychophysical experiments require measurements of the light coming from the scene to the observers’ eyes

  • In order to model the human response to light, we need to understand the sequence of input stimuli at each stage in the visual pathway: (1) the light coming from objects (the array of scene luminances); (2) the light falling on the retina after intraocular glare (the retinal contrast image)

  • The input and output of the Glare Spread Function (GSF) MATLAB code are a pair of integer arrays

Introduction

Psychophysical experiments require measurements of the light coming from the scene to the observers' eyes. These data include the luminance and the angular subtense of each scene element. The quanta catch of retinal receptors is the combination of scene luminances and optical distortions such as glare. Research in vision and photography over the past 150 years has refined our understanding of these generalizations. Studies of both the adaptation of photoreceptor sensitivity (Dowling, 1978) and the important role of spatial neural interactions (McCann and Rizzi, 2012) have shown that the quanta catch of a single photoreceptor does not generate a unique sensation. The retinal contrast values are calculated using the MATLAB code included in the full paper; a sketch of that computation follows below.
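The published code itself is not reproduced in this page capture. The fragment below is a minimal sketch of the shape of that computation: build a glare-spread kernel over the angular coordinates of the luminance array, normalize it to conserve light, and convolve it with the scene. For brevity it substitutes the classic Stiles-Holladay approximation (10/θ²) for the full CIE standard GSF, and it works in double precision rather than the integer arrays the Highlights mention; the pixel pitch, array size, test scene, and final contrast normalization are illustrative assumptions, not values from the paper.

    % Minimal sketch, not the authors' published MATLAB code.
    % Substitutes the Stiles-Holladay approximation (10/theta^2) for the
    % full CIE standard GSF; sizes and the test scene are illustrative.
    degPerPixel = 0.05;                       % assumed angular subtense per pixel
    N = 256;                                  % assumed array size
    scene = repmat(logspace(0, 4, N), N, 1);  % synthetic 4-log-unit luminance ramp

    % Glare-spread kernel over the angular distance from each pixel
    [x, y] = meshgrid((-N/2:N/2 - 1) * degPerPixel);
    theta = hypot(x, y);                      % glare angle in degrees
    theta(theta < degPerPixel) = degPerPixel; % clamp to avoid the singularity at 0
    gsf = 10 ./ theta.^2;                     % Stiles-Holladay stand-in for CIE GSF
    gsf = gsf / sum(gsf(:));                  % normalize so total light is conserved

    % Convolve scene with kernel via the FFT (circular; edge padding omitted)
    retinal = real(ifft2(fft2(scene) .* fft2(ifftshift(gsf))));
    retinalContrast = log10(retinal / max(retinal(:)));  % log contrast re: maximum

Because the kernel is normalized to unit sum, the convolution redistributes light without changing the total: the brightest scene elements veil the darkest regions, which is the glare effect the paper compares against observer data.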
