Abstract

3D human motion data has grown enormously across fields such as 3D gaming (e.g., EA Sports titles) and medicine (physical medicine and rehabilitation). An effective content-based 3D human motion retrieval scheme that supports human-level language queries is therefore needed. However, 3D human motion data and text are heterogeneous media, so a large semantic gap lies between them. In this paper, we propose a cross-media retrieval framework that reduces this semantic gap through semantic spatiotemporal dimensionality reduction and reformulates 3D human motion data as an HMDoc (Human Motion Document) representation, which is well suited to traditional information retrieval techniques such as Latent Semantic Indexing. After mapping complex 3D human motion matrices into the semantic space, we achieve 88.72% precision and 86.98% recall on 14 motion categories comprising 370,294 frames. Our proposed HMDoc approach extracts the semantic characteristics of human motion capture data, and this compact semantic feature representation outperforms prior methods such as the weighted motion feature vector, the LB_Keogh method, and geometric feature representations.
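To illustrate the retrieval step the abstract alludes to, the sketch below applies Latent Semantic Indexing to a toy term-document matrix. This is not the paper's HMDoc pipeline: the matrix values, the choice of `k = 2` latent dimensions, and the query vector are all illustrative assumptions; rows stand in for quantized motion "words" and columns for motion documents.

```python
import numpy as np

# Toy term-document count matrix (assumed data, not from the paper).
# Rows: quantized motion "words"; columns: motion documents.
A = np.array([
    [3, 0, 1, 0],
    [2, 0, 0, 1],
    [0, 4, 0, 2],
    [0, 3, 1, 2],
    [1, 0, 3, 0],
], dtype=float)

k = 2  # number of latent semantic dimensions (assumption)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T  # rank-k truncation

# Represent each document in the latent semantic space.
docs = Vk * sk

# Fold a query (a bag of motion words) into the same latent space.
q = np.array([2, 1, 0, 0, 1], dtype=float)
q_latent = q @ Uk / sk

# Rank documents by cosine similarity to the query.
sims = docs @ q_latent / (
    np.linalg.norm(docs, axis=1) * np.linalg.norm(q_latent)
)
ranking = np.argsort(-sims)
print(ranking)
```

The rank-k truncation is what "reduces the semantic gap" in LSI terms: documents that share few raw terms but co-occur in similar contexts end up close in the latent space.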
