Abstract

We present a system for interacting with 3D objects in a 3D virtual environment. Using the observation that a typical head‐mounted display (HMD) does not cover the user's entire face, we use a fiducial marker placed on the HMD to locate the user's exposed facial skin. From this region, a skin model is built and combined with depth information obtained from a stereo camera. Used in tandem, this information allows the positions of the user's hands to be detected and tracked in real time. Once both hands are located, our system allows the user to manipulate the object with five degrees of freedom (translation along the x‐, y‐, and z‐axes, plus roll and yaw rotations) in virtual three‐dimensional space using a series of intuitive hand gestures. Copyright © 2009 John Wiley & Sons, Ltd.
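The abstract describes a pipeline in which a skin colour model, seeded from the exposed face region near the HMD marker, is intersected with a stereo depth mask to segment the hands. The paper does not give its exact formulation, so the following is only a minimal illustrative sketch in NumPy: it assumes a simple per-channel statistical skin model and a hypothetical working depth range for the hands; the function names (`build_skin_model`, `detect_hands`) and thresholds are invented for illustration.

```python
import numpy as np

def build_skin_model(face_patch):
    """Build a per-channel colour model from a patch of exposed facial skin.

    face_patch: H x W x 3 float array of skin pixels sampled near the
    fiducial marker on the HMD (an assumed preprocessing step).
    Returns (mean, std) over the colour channels.
    """
    pixels = face_patch.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0) + 1e-6  # avoid division by zero
    return mean, std

def detect_hands(frame, depth, model, depth_range=(0.3, 1.0), k=2.5):
    """Segment candidate hand pixels by combining colour and depth cues.

    frame: H x W x 3 colour image; depth: H x W depth map in metres
    (from the stereo camera); depth_range and k are hypothetical
    thresholds, not values from the paper.
    Returns a boolean mask of pixels that are both skin-coloured and
    within the assumed hand working volume.
    """
    mean, std = model
    # Normalised per-channel distance to the skin model; a pixel is
    # skin-like if every channel lies within k standard deviations.
    dist = np.abs((frame - mean) / std).max(axis=-1)
    skin = dist < k
    near = (depth > depth_range[0]) & (depth < depth_range[1])
    return skin & near
```

In a real system the resulting mask would be cleaned with morphological operations and split into left/right hand blobs before gesture recognition; the sketch stops at the colour-plus-depth intersection that the abstract highlights.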

