Abstract

Virtual environments play an increasingly important role in training people for real-world situations, especially through the use of serious games. A key concern is therefore the level of realism a virtual environment requires so that what the user perceives in the virtual world accurately matches what they would expect in the real one. Failure to achieve the right level of realism carries the risk that the user will adopt a different reaction strategy in the virtual world than would be desired in reality. High-fidelity, physically based rendering has the potential to deliver the same perceptual quality of an image as if the viewer were “there” in the real-world scene being portrayed. However, our perception of an environment depends not only on what we see; it may be significantly influenced by other sensory inputs, including sound, smell, touch, and even taste. Computing and delivering all of these sensory stimuli at interactive rates is a computationally demanding problem, and achieving true physical accuracy for each sense individually, for any complex scene in real time, is simply beyond current standard desktop computers. This paper discusses how human perception, and in particular cross-modal effects in multi-sensory perception, can be exploited to deliver high-fidelity virtual environments selectively. Selective delivery enables the parts of a scene to which the user is attending to be computed in high quality, while the remainder of the scene is delivered in lower quality, at a significantly reduced computational cost, without the user being aware of the quality difference.
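
The selective-delivery idea in the final sentences can be made concrete as a sample-allocation policy: pixels near the user's current point of attention receive the full rendering budget, and the budget falls off with distance. The sketch below is illustrative only and is not the paper's implementation; the function name, radii, and sample counts are assumptions chosen for the example.

```python
import math

def samples_per_pixel(px, py, gaze_x, gaze_y,
                      inner_radius=50.0, outer_radius=300.0,
                      high_spp=64, low_spp=4):
    """Allocate ray-tracing samples for one pixel based on its
    distance (in pixels) from the current fixation point.
    Pixels within inner_radius get full quality; the budget falls
    off linearly to low_spp at outer_radius and beyond.
    (All parameter values are hypothetical, for illustration.)"""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= inner_radius:
        return high_spp
    if d >= outer_radius:
        return low_spp
    t = (d - inner_radius) / (outer_radius - inner_radius)
    return round(high_spp + t * (low_spp - high_spp))

# Example: with the gaze at the centre of a 1920x1080 frame, an
# attended pixel gets the full budget and a peripheral one does not.
print(samples_per_pixel(960, 540, 960, 540))   # 64
print(samples_per_pixel(100, 100, 960, 540))   # 4
```

In practice the fall-off would be driven by measured visual acuity or by the cross-modal attention effects the paper discusses, rather than by a simple linear ramp in image space.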
