Abstract

A highly accurate motion segmentation technique for time-varying meshes (TVMs) is presented. Conventional approaches analyze object motion using shape feature vectors extracted from TVM frames, because locating and tracking feature points in 3D space is very difficult when the number of vertices and their connectivity change in every frame. In this study, we developed an algorithm that analyzes object motion directly in 3D space using spherical registration based on the iterative closest point (ICP) algorithm. Rough motion tracking is performed, and the degree of motion is robustly calculated from the registration result. Although the approach is straightforward, it yields much better motion segmentation results than conventional approaches, achieving average precision and recall rates of 95% and 92%, respectively.
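The abstract names the key ingredients of the method: registration between neighboring frames based on ICP and a per-frame degree of motion used as the segmentation criterion. The sketch below is only an illustrative assumption, not the paper's implementation: it uses point-to-point ICP from Open3D instead of the paper's spherical registration, treats the ICP residual as the degree of motion, and marks frames with little motion as candidate segment boundaries. All function names and thresholds are hypothetical.

```python
# Minimal sketch (not the paper's implementation): estimate a per-frame
# "degree of motion" for a TVM sequence by registering consecutive frames
# with point-to-point ICP, then mark segment boundaries where the motion
# degree falls below a threshold (i.e., near-static poses between motions).
# Open3D and all thresholds here are illustrative assumptions.
import numpy as np
import open3d as o3d

def degree_of_motion(prev_pts: np.ndarray, curr_pts: np.ndarray,
                     max_corr_dist: float = 0.05) -> float:
    """Register curr_pts onto prev_pts with ICP and return the residual
    (inlier RMSE) of matched points as a rough measure of motion."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.inlier_rmse  # larger residual -> larger non-rigid motion

def segment_boundaries(frames: list, motion_thresh: float = 0.01) -> list:
    """Return indices of frames whose motion degree relative to the previous
    frame drops below the threshold, treated as candidate segment boundaries."""
    boundaries = []
    for i in range(1, len(frames)):
        if degree_of_motion(frames[i - 1], frames[i]) < motion_thresh:
            boundaries.append(i)
    return boundaries
```

In this sketch a low residual indicates a near-static pose, which is one plausible way to separate motions; the actual segmentation criterion and thresholds used in the paper may differ.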

Highlights

  • Three-dimensional (3D) geometric modeling of human appearance and motion based on computer vision techniques [1,2,3,4,5,6,7] is attracting increasing attention as an ultimate form of interactive multimedia

  • Although 3D scene generation based on image-based rendering (IBR) [8,9,10,11,12,13,14,15,16] is very popular because views from virtual cameras can be synthesized very quickly without estimating the 3D shape of the objects, 3D geometric modeling has some attractive features: (1) it requires far fewer cameras than IBR, (2) the 3D models can be viewed from any viewpoint and thus provide "freer" free-viewpoint video than IBR, and (3) it is compatible with augmented reality (AR) technology

  • There are several variations of 3D video data structures. The 3D video discussed in this paper is defined as a sequence of 3D mesh models composed of three kinds of data: vertex positions, their connectivity, and the color of each vertex


Summary

Introduction

Three-dimensional (3D) geometric modeling of human appearance and motion based on computer vision techniques (i.e., using only multiple cameras) [1,2,3,4,5,6,7] is attracting increasing attention as an ultimate form of interactive multimedia. The 3D video discussed in this paper is defined as a sequence of 3D mesh models composed of three kinds of data: vertex positions, their connectivity, and the color of each vertex. Hereafter, we refer to such data as a time-varying mesh (TVM). In contrast with computer-graphics-based 3D mesh animation, often called dynamic mesh or dynamic geometry, one of the most important characteristics of a TVM is that the number of vertices and the topology change in every frame because of the nonrigid nature of the human body and clothing. Each frame is generated independently of its neighboring frames, which makes data processing for TVMs much more challenging.
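Because the definition of a TVM frame above is purely structural (vertex positions, connectivity, and per-vertex colors, all of whose sizes may change from frame to frame), a small container like the following may make it concrete. The names and types are illustrative assumptions, not a format specified by the paper.

```python
# Illustrative container for one TVM frame as defined above: vertex
# positions, triangle connectivity, and per-vertex colors. Because frames
# are reconstructed independently, the array sizes can differ every frame
# and there is no vertex correspondence between neighboring frames.
from dataclasses import dataclass
import numpy as np

@dataclass
class TVMFrame:
    vertices: np.ndarray   # (N, 3) float, vertex positions; N varies per frame
    triangles: np.ndarray  # (M, 3) int, vertex indices forming triangles
    colors: np.ndarray     # (N, 3) float, RGB color attached to each vertex

# A TVM sequence is then simply a list of independently reconstructed frames:
# sequence = [TVMFrame(v0, t0, c0), TVMFrame(v1, t1, c1), ...]
```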

