Abstract

Synthetic imagery used for training and evaluating visual search and detection tasks should yield the same observer performance as obtained in the field. Generating synthetic imagery typically involves a range of computational approximations and simplifications of the physical processes underlying image formation, in order to meet the update rates of real-time systems or simply to achieve reasonable computation times. These approximations reduce the fidelity of the resulting imagery, which in turn affects observer performance. We recently introduced visual conspicuity as an efficient task-related measure for calibrating synthetic imagery used in human visual search and detection tasks. Target conspicuity determines mean visual search time: targets in synthetic imagery with the same visual conspicuity as their real-world counterparts give rise to observer performance in simulated search and detection tasks similar to the performance in equivalent real-world scenarios. In the present study we compare the conspicuity and detection ranges of real and simulated targets with different degrees of shading. When ambient occlusion is taken into account and the contrast ratios in a scene are calibrated, the detection ranges and conspicuity values of simulated targets are equivalent to those of their real-world counterparts, for different degrees of shading. When no shading, or incorrect shading, is applied in the simulation, this is not the case, and the resulting imagery cannot be deployed for training visual search and detection tasks.
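The contrast-ratio calibration mentioned above can be illustrated with a minimal sketch. The abstract does not specify the contrast metric used; the example below assumes Weber contrast and a hypothetical `calibrate_gain` helper that scales the simulated target's luminance so that its contrast against the local background matches the field measurement. It is a simplified illustration, not the authors' actual calibration procedure.

```python
def weber_contrast(target_luminance, background_luminance):
    """Weber contrast of a target against its local background:
    C = (L_t - L_b) / L_b. (Assumed metric; the paper may use another.)"""
    return (target_luminance - background_luminance) / background_luminance

def calibrate_gain(sim_target, sim_background, real_target, real_background):
    """Hypothetical helper: return the luminance gain g to apply to the
    simulated target so its Weber contrast matches the real-world one,
    with the simulated background held fixed.
    Solves (g * L_t_sim - L_b_sim) / L_b_sim = C_real for g."""
    c_real = weber_contrast(real_target, real_background)
    return (c_real + 1.0) * sim_background / sim_target

# Example: the simulated target renders brighter than its real counterpart
g = calibrate_gain(sim_target=80.0, sim_background=50.0,
                   real_target=60.0, real_background=50.0)
calibrated_target = 80.0 * g
print(weber_contrast(calibrated_target, 50.0))  # matches the field contrast of 0.2
```

After applying the gain, the simulated target's contrast against its background equals the contrast measured in the field, which is a necessary (though not sufficient) condition for the conspicuity values to match.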
