Abstract
Content-based human motion capture (MoCap) data retrieval facilitates reusing motion data that have already been captured and stored in a database. For a MoCap data retrieval system to be practically deployed, both high precision and a natural interface are required. Targeting both, we propose a video-based human MoCap data retrieval solution in this work. It lets users specify a query via a video clip, addresses the representational gap between video and MoCap clips, and extracts discriminative motion features for precise retrieval. Specifically, the proposed scheme first converts each video clip or MoCap clip at a certain viewpoint to a binary silhouette sequence. Regarding a video or MoCap clip as a set of silhouette images, the proposed scheme uses a convolutional neural network, named MotionSet, to extract the discriminative motion feature of the clip. The extracted motion features are used to match a query to repository MoCap clips for retrieval. Besides the algorithmic solution, we also contribute a paired human MoCap dataset and human motion video dataset that contain various action classes. Experiments show that our proposed scheme achieves an increase of around 0.25 in average MAP and takes about 1/26 of the time for online retrieval, compared with the benchmark algorithm.
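The matching step described above (comparing the motion feature extracted from a query video against the features of repository MoCap clips) can be sketched as below. This is a minimal illustration, not the paper's implementation: the MotionSet network is not reproduced here, the feature dimension and cosine similarity are illustrative assumptions, and the random vectors merely stand in for extracted features.

```python
import numpy as np

def cosine_similarity(query, repo):
    # Cosine similarity between a query vector and each row of a feature matrix.
    query = query / np.linalg.norm(query)
    repo = repo / np.linalg.norm(repo, axis=1, keepdims=True)
    return repo @ query

def retrieve(query_feature, repo_features, top_k=5):
    """Rank repository MoCap clips by similarity to the query feature.

    query_feature: (d,) feature vector extracted from the query video clip
    repo_features: (n, d) matrix of features for n repository MoCap clips
    Returns indices of the top_k most similar repository clips.
    """
    sims = cosine_similarity(query_feature, repo_features)
    return np.argsort(-sims)[:top_k]

# Toy example: random "features" standing in for MotionSet outputs
# (400 clips mirrors the dataset size mentioned in the highlights;
# the 128-dim feature size is an assumption).
rng = np.random.default_rng(0)
repo = rng.standard_normal((400, 128))
query = repo[42] + 0.01 * rng.standard_normal(128)  # near-duplicate of clip 42
print(retrieve(query, repo))
```

Because the features are precomputed offline for every repository clip, online retrieval reduces to this similarity ranking, which is consistent with the low online retrieval cost reported above.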
Highlights
There is a growing demand for motion capture (MoCap) technology in many fields, including interactive virtual reality, film production, and animation.
We propose a video-based human MoCap data retrieval scheme, which takes as input a video clip and retrieves similar MoCap clips from the repository.
Experimental results: each of the 400 MoCap clips in our human MoCap dataset is converted to 4 binary silhouette sequences at 4 viewpoints, respectively, by the method in Sec.
Summary
There is a growing demand for motion capture (MoCap) technology in many fields, including interactive virtual reality, film production, and animation. Semantic analysis of text labels is required for precise motion retrieval, which is itself a challenging task. This has motivated intensive research on content-based retrieval of MoCap data instead, and we follow this line in this work. Content-based human MoCap data retrieval has drawn much research attention in recent years, with many good algorithms proposed. In these algorithms, various query modalities have been used, including MoCap clips [1]–[11], hand-drawn sketches [12]–[14], puppet motion [15], [16], Kinect skeleton motion [5], and video clips [17]–[19].