Abstract

By displaying slightly disparate video images of a scene separately to each eye, three-dimensional (3-D) viewing systems provide depth cues in addition to those available with 2-D viewing. Unfortunately, significant performance advantages with 3-D viewing have been realized only sometimes. There is evidence of a performance advantage with 3-D viewing in remote manipulation tasks, in which the cameras view the remote scene from a stable position. However, little information is available on tasks that involve viewing moving stimuli. Many teleoperation applications involve viewing scenes in which movement occurs, either because objects in the remote environment are dynamic or because the cameras move through space on a mobile robotic platform.

The purpose of the present research is to examine experimentally the interactions between selected 3-D viewing system parameters and the ability to perceive depth with static and moving stimuli. The results will provide basic data on the extent to which static and moving stimuli require similar adjustments of a 3-D viewing system to produce optimal performance.

Subjects viewed videotaped scenes in which two flat, rectangular targets appeared: one oriented vertically, the other horizontally. Subjects were asked to judge the extent to which the targets were aligned or offset in depth. The targets were presented at a range of actual offsets. In some conditions both targets were static, while in others the horizontal target moved in a plane perpendicular to the cameras. Lighting, size, and linear perspective cues to depth were controlled so that the depth judgements had to be made on the basis of retinal disparity. Subjects' accuracy in judging small differences in depth, at three different target-to-camera distances, was determined as two viewing system parameters, optical base (i.e., the separation of the cameras) and convergence of the cameras, were varied independently.

Of interest were both the levels of accuracy that can be attained with static versus moving stimuli and any differences in the combination of optical base and convergence that leads to optimal performance for each. The data collected here will prove useful in the development of a remote viewing system that, like human binocular vision, can dynamically change optical base and convergence as appropriate for the scene being viewed. Such a system should improve the operator's sense of telepresence in remote environments and thereby enhance the performance efficiency and operational safety of teleoperated systems.
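The two parameters manipulated in the study, optical base and convergence, jointly determine the retinal disparity available for a depth judgement through simple stereo geometry. The following sketch is an illustration of that relation only, not part of the original study; the function name and the numeric values are assumptions chosen for the example.

```python
# Small-angle stereo geometry: two cameras separated by an optical base b,
# converged at distance z_c. A point on the midline at distance z projects
# with horizontal on-screen disparity
#     delta ~= f * b * (1/z - 1/z_c)
# where f is the focal length in pixels. Illustrative sketch only.

def horizontal_disparity(base_m: float, focal_px: float,
                         target_m: float, convergence_m: float) -> float:
    """On-screen horizontal disparity in pixels (small-angle approximation).

    Positive: target nearer than the convergence point (crossed disparity);
    zero at the convergence distance; negative beyond it (uncrossed).
    """
    return focal_px * base_m * (1.0 / target_m - 1.0 / convergence_m)

# Doubling the optical base doubles the disparity produced by a given
# depth offset, which is one reason base and convergence can interact
# with judgement accuracy.
near = horizontal_disparity(0.10, 1000.0, 1.0, 2.0)  # crossed disparity
wide = horizontal_disparity(0.20, 1000.0, 1.0, 2.0)  # twice the base
```

Note that while a wider base magnifies the disparity of a given offset, it also magnifies the disparities of everything else in the scene, so large bases can push peripheral scene points beyond comfortable fusion limits; this trade-off is part of why the optimal base/convergence combination is an empirical question.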
