Abstract

Human perception of depth and motion is strongly influenced by pictorial cues. These cues act on very basic neuronal processing structures and modulate the perception of distance and velocity at a higher perceptual level. It is therefore proposed to convey previously acquired distance and velocity information by manipulating or introducing motion and monocular depth cues in images and video streams, in order to support human visual scene assessment. The addressed techniques utilize artificial depth of field, exaggerated motion blur, and color-coded risk potential renderings. Except for the latter, these renderings are designed to maintain the natural look of the input material as much as possible. As a result, the supplemental distance and velocity information is conveyed to the viewer in a natural but distinct way. Before depth and velocity cues can be rendered into images and video material, this information has to be acquired. Techniques for scene reconstruction are therefore covered in the first section, followed by a description of the newly presented rendering methods.
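
To make the first of the named techniques concrete, the following is a minimal, illustrative sketch of an artificial depth-of-field rendering driven by a per-pixel depth map: pixels far from a chosen focal plane are blurred more strongly than pixels near it. This is not the paper's implementation; the function name, parameters (focus_depth, max_sigma, n_levels), and the simple blur-level interpolation are assumptions made here for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def artificial_depth_of_field(image, depth, focus_depth, max_sigma=6.0, n_levels=5):
    """Depth-dependent blur: pixels far from the focal plane are blurred more.

    Illustrative sketch only; parameters and strategy are assumptions, not the
    method described in the paper.
    """
    image = image.astype(np.float64)
    # Normalised defocus measure: 0 at the focal plane, 1 at the largest offset.
    defocus = np.abs(depth.astype(np.float64) - focus_depth)
    defocus /= defocus.max() + 1e-8

    # Pre-blur the image at a small set of sigma levels (per colour channel).
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    stack = np.stack([
        np.stack([gaussian_filter(image[..., c], s) for c in range(image.shape[-1])], axis=-1)
        for s in sigmas
    ])  # shape: (n_levels, H, W, C)

    # Per-pixel linear interpolation between the two nearest blur levels.
    idx = defocus * (n_levels - 1)
    lo = np.floor(idx).astype(int)
    hi = np.clip(lo + 1, 0, n_levels - 1)
    w = (idx - lo)[..., None]
    lo_img = np.take_along_axis(stack, lo[None, ..., None], axis=0)[0]
    hi_img = np.take_along_axis(stack, hi[None, ..., None], axis=0)[0]
    return (1.0 - w) * lo_img + w * hi_img

# Example: keep the nearest objects sharp in a synthetic scene.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0.0, 1.0, size=(120, 160, 3))           # placeholder RGB image
    depth = np.tile(np.linspace(1.0, 10.0, 160), (120, 1))    # depth increasing to the right
    rendered = artificial_depth_of_field(img, depth, focus_depth=1.0)
    print(rendered.shape)  # (120, 160, 3)
```

Blending between a few pre-blurred copies keeps the cost low while still producing a smooth, depth-dependent blur; a production renderer would more likely use a spatially varying kernel or layered compositing to avoid colour bleeding across depth discontinuities.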
