Abstract

To recreate human movements in a virtual environment in real time, we propose a new method for real-time tracking of 3D virtual full-body motion using a depth-sensing camera. The method relies on natural, non-contact interaction. The 3D virtual environment was constructed using a 3D graphics engine, and human joint data were calculated from images acquired by a PrimeSense depth-sensing camera. Skeletal data for the human model in a skinned-mesh animation were then separated by improving the mesh modules in the 3D graphics engine. Finally, motion data from the depth sensor were combined with joint data for the human model to yield full-body control of a virtual human (VH). Experimental results show that the proposed method can drive VH full-body movements in real time based on motion-sensing data. The method was applied to virtual driving training for agricultural machinery. Trainees can become familiar with the basic operations required for driving agricultural machinery using full-body motion instead of a mouse and keyboard. The training system is inexpensive, safe, and highly immersive.
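
The abstract describes combining per-frame joint data from the depth sensor with the skeleton of the skinned-mesh human model. The sketch below (Python with NumPy) illustrates one common way such a mapping can be done: each skeleton bone is rotated so that its bind-pose direction aligns with the direction between two tracked joints. All joint names, bone names, and data structures here are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import numpy as np

def quat_from_two_vectors(a, b):
    """Quaternion (w, x, y, z) rotating unit vector a onto unit vector b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    c = np.cross(a, b)
    w = 1.0 + float(np.dot(a, b))
    q = np.array([w, *c])
    n = np.linalg.norm(q)
    # Fallback for opposite vectors: 180-degree rotation about the x-axis.
    return q / n if n > 1e-8 else np.array([0.0, 1.0, 0.0, 0.0])

# Hypothetical mapping: each bone of the skinned mesh is driven by the direction
# between two joints (parent -> child) reported by the depth sensor.
BONE_JOINT_PAIRS = {
    "upper_arm_l": ("shoulder_l", "elbow_l"),
    "forearm_l":   ("elbow_l", "hand_l"),
    "thigh_r":     ("hip_r", "knee_r"),
    "shin_r":      ("knee_r", "foot_r"),
}

def retarget_frame(sensor_joints, bind_directions):
    """Compute per-bone rotations for one captured frame.

    sensor_joints:   dict joint_name -> 3D position from the depth camera.
    bind_directions: dict bone_name -> bone direction in the model's bind pose.
    Returns a dict bone_name -> quaternion to apply to the skinned-mesh bone.
    """
    rotations = {}
    for bone, (parent, child) in BONE_JOINT_PAIRS.items():
        if parent not in sensor_joints or child not in sensor_joints:
            continue  # joint not tracked this frame; keep the previous pose
        tracked_dir = sensor_joints[child] - sensor_joints[parent]
        rotations[bone] = quat_from_two_vectors(bind_directions[bone], tracked_dir)
    return rotations
```

In a real-time loop, such rotations would be computed once per sensor frame and written to the corresponding bones of the engine's skinned-mesh skeleton, which is what allows the virtual human to mirror the trainee's full-body motion.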
