Abstract

We describe new techniques for interactive input and manipulation of three-dimensional data using a motion tracking system combined with an autostereoscopic display. New 3D interaction devices and methods are necessary because adapting 2D interaction techniques to manipulate stereoscopic 3D data is unintuitive and difficult. We describe new interaction techniques that run in real time with a wide array of commercially available autostereoscopic displays. Our interaction system enables 3D interaction with autostereoscopic displays using computer vision algorithms and a 3D cursor interaction device. Two hardware-synchronized FireWire video cameras track a user's hand motion in space with the help of a small handheld 3D cursor utilizing a light pen or other small light source. Software analyzes the 3D tracking data and creates interactive computer commands for manipulating objects in virtual space. Software then draws a 3D scene and interlaces multiple images of this 3D scene for driving an autostereoscopic display. The 3D cursor is tracked within a separate interaction space, so that users interact with images appearing both inside and outside the display. With multi-view autostereoscopic displays, multiple users can see the interaction and interact with the display at the same time. We describe two multiple-camera image acquisition and multi-view image display models. These models relate a user's hand motions with the cursor to the image they perceive on the autostereoscopic display. Our mathematical analysis also describes different autostereoscopic image preparation (i.e., interlacing or interzig) methods and shows the relationship of these methods to our image acquisition and display models. We also report the results of user tests we have performed with different display technologies and interaction methods.
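The core of the tracking step described above is recovering the 3D position of the handheld light source from its pixel coordinates in the two synchronized camera images. The paper does not give its tracking math in the abstract, so the following is only a minimal sketch of one standard approach (linear DLT triangulation from two calibrated camera projection matrices); the function name and interfaces are illustrative, not the authors' actual implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices of the two FireWire cameras.
    x1, x2: (u, v) image coordinates of the tracked light source
            in camera 1 and camera 2, respectively.
    Returns the estimated 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a real pipeline, `x1` and `x2` would come from detecting the bright light-pen blob in each frame, and `P1`, `P2` from an offline stereo calibration of the camera pair.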
We have measured interaction performance by recording subjects' task completion time, efficiency (which is inversely proportional to the object manipulation path length), and depth judgment accuracy. We have found that interaction performance depends on the application, the display type, the interaction device, constraints on user motion, the display viewing area, familiarity with the displays, and the gender of the test subject. We have also found that, with careful selection of these variables, autostereoscopic displays can perform almost as well as glasses-based methods.
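The efficiency measure above is stated to be inversely proportional to the object manipulation path length. A minimal sketch of one way to compute such a measure from logged 3D cursor samples is shown below; normalizing by the straight-line start-to-goal distance (so that a perfectly direct movement scores 1.0) is an assumption on my part, not necessarily the authors' exact metric.

```python
import numpy as np

def path_efficiency(samples, start, goal):
    """Efficiency of one object-manipulation trial.

    samples: (N, 3) array of tracked 3D cursor positions over the trial.
    start, goal: 3D start and target positions of the manipulated object.
    Returns ideal_distance / actual_path_length, so efficiency is
    inversely proportional to the path length, with 1.0 = perfectly direct.
    """
    samples = np.asarray(samples, dtype=float)
    # Sum of segment lengths between consecutive cursor samples.
    path_len = np.sum(np.linalg.norm(np.diff(samples, axis=0), axis=1))
    ideal = np.linalg.norm(np.asarray(goal, float) - np.asarray(start, float))
    return ideal / path_len if path_len > 0 else 1.0
```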
