Abstract

The control of virtual video game environments through body motion has recently attracted great interest from academic and industry research groups because it enables many new interactive experiences. With the growing availability of affordable 3D camera technology, researchers have increasingly investigated the control of games through body and hand gestures. In addition, the falling cost of MEMS technology has increased the popularity of physical controllers incorporating accelerometers, gyroscopes, and other sensors. Existing work, however, has yet to combine the strengths of a 3D camera with those of a physical game controller to provide six degrees of freedom and a one-to-one correspondence between real-world 3D space and the virtual environment. In this paper, a human-computer interface is presented that allows users to manipulate 3D objects within a virtual space by simultaneously using one hand to perform gestures and the other to operate a physical controller. This is accomplished by processing the data returned from a custom 3D depth camera to recognize hand gestures and to obtain the absolute position of the controller-wielding hand. Using a composite transformation matrix, this position data is fused with the orientation data measured by the sensors within the controller, and the resulting matrix is applied to a 3D object within a virtual environment in real time. Two prototype environments that combine hand gestures and a physical controller are used to evaluate this new method of interactive gaming.
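As a sketch of the fusion step described above (the notation here is ours, not taken from the paper): let $\mathbf{p} \in \mathbb{R}^3$ denote the absolute position of the controller-wielding hand reported by the depth camera, and let $\mathbf{R} \in SO(3)$ denote the rotation matrix derived from the controller's accelerometer and gyroscope readings. The composite transformation applied to the virtual object each frame can then be written as the homogeneous matrix

$$M = \begin{bmatrix} \mathbf{R} & \mathbf{p} \\ \mathbf{0}^{\top} & 1 \end{bmatrix},$$

so that a vertex $\mathbf{v}$ of the object, expressed in homogeneous coordinates, is rendered at $M\mathbf{v}$. This composition is what yields the six degrees of freedom: three translational from the camera's position measurement and three rotational from the controller's orientation sensors.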
