Abstract

In this article, we describe our work toward direct manipulation of 3D virtual objects used as virtual props by a human actor recording live video content. For direct manipulation of the virtual props, the actor wears a data glove, which is replaced, via chroma keying, by a realistic virtual hand with the same 3D shape as the actor's hand before the video is delivered to viewers. When the virtual hand replaces the data glove on the actor's hand, its posture is adjusted so that it grasps the virtual props without misalignment. To model the virtual hand, the actor's hand is measured by light-stripe triangulation while it moves through various postures. The resulting 3D point set is segmented into points corresponding to the parts of the actor's hand, based on the differences in motion among those parts, and the segmented points are integrated into a 3D description of an articulated object. The 3D virtual objects used as virtual props are modeled by recovering the 3D shapes of real objects with the volume intersection method. The recovered 3D shape is further refined using the motion of the object and random-pattern backgrounds.
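
For readers unfamiliar with the volume intersection (shape-from-silhouette) method mentioned above, the sketch below illustrates the basic idea of carving a voxel grid against binary silhouettes from calibrated views. This is a minimal, generic illustration, not the authors' implementation; the function name, grid parameters, and assumed inputs (silhouette masks and 3x4 projection matrices) are hypothetical.

```python
import numpy as np

def carve_voxels(silhouettes, projection_matrices, grid_min, grid_max, resolution=64):
    """Keep only voxels whose projection falls inside every silhouette.

    silhouettes: list of binary (H, W) arrays, one per view (assumed given).
    projection_matrices: list of 3x4 camera projection matrices (assumed known).
    grid_min, grid_max: 3-element arrays bounding the working volume.
    """
    # Build a regular grid of voxel centers inside the bounding box.
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    points = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(points.shape[0], dtype=bool)
    for sil, P in zip(silhouettes, projection_matrices):
        # Project all voxel centers into this view and do the perspective divide.
        proj = points @ P.T
        uv = proj[:, :2] / proj[:, 2:3]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = sil.shape
        inside_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside_sil = np.zeros_like(occupied)
        inside_sil[inside_image] = sil[v[inside_image], u[inside_image]] > 0
        # A voxel survives only if it lies inside the silhouette in every view.
        occupied &= inside_sil

    return occupied.reshape(resolution, resolution, resolution)
```

The intersection of the silhouette cones is exactly the set of voxels that survive all views; adding views taken as the object moves, or against random-pattern backgrounds that make silhouette extraction more reliable, tightens this visual hull, which is the refinement the abstract alludes to.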
