Abstract

To improve interaction efficiency and user experience in desktop-based Virtual Assembly Systems (VAS), this paper proposes a low-cost yet strongly immersive virtual assembly experiment system built with Kinect and the Unity3D engine. To accomplish interaction tasks through body motion, gesture, and voice, three key issues had to be addressed: establishing assembly model data, recognizing hand gestures, and changing the virtual assembly scene view through body motion. Accordingly, three methods are proposed: "treelike hierarchy modeling", "gesture semantics extension", and "multidimensional head tracking", and their implementation process is discussed. A Speed Reducer Virtual Assembly System (SRVAS) was designed and realized with these methods. Practical results show that the gesture recognition method achieves good accuracy and robustness and is almost unaffected by lighting conditions and complex backgrounds, and that Kinect somatosensory interaction can improve interaction efficiency and enhance user immersion at low cost in a 3D virtual system.
