Abstract

Motion capture technologies reconstruct human movements and have wide-ranging applications. Mainstream motion capture research divides into vision-based methods and inertial measurement unit (IMU)-based methods. Vision-based methods capture complex 3D geometric deformations with high accuracy, but they rely on expensive optical equipment and suffer from line-of-sight occlusion. IMU-based methods are lightweight but struggle to capture fine-grained 3D deformations. In this work, we present a configurable self-sensing IMU sensor network that bridges the gap between vision-based and IMU-based methods. To achieve this, we propose a novel kinematic chain model based on the four-bar linkage to describe the minimal deformation process underlying 3D deformations. We also introduce three geometric priors, derived from the initial shape, the material properties, and the motion features, that assist the kinematic chain model in reconstructing deformations and overcome the data-sparsity problem. Additionally, to further improve the accuracy of deformation capture, we propose a fabrication method that customizes 3D sensor networks for different objects. The customization follows origami-inspired thinking, constructing 3D sensor networks through a 3D-2D-3D digital-physical transition. Experimental results demonstrate that our method achieves performance comparable to state-of-the-art methods.
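The four-bar linkage at the core of the proposed kinematic chain model is a standard planar mechanism with a well-known closed-form position solution, which gives a feel for what a single chain element computes. The sketch below solves the Freudenstein equation for the output angle of a four-bar linkage given an input angle; it is a minimal illustration of the underlying mechanism only, not the paper's actual chain formulation, and the function name, parameter names, and branch convention are assumptions.

```python
import math

def four_bar_output_angle(a, b, c, d, theta2, open_branch=True):
    """Closed-form output angle (theta4) of a planar four-bar linkage.

    a: input (crank) link length, b: coupler link length,
    c: output (rocker) link length, d: ground link length,
    theta2: input angle in radians, measured from the ground link.
    Illustrative helper (not from the paper); raises ValueError if
    the linkage cannot assemble at this input angle.
    """
    k1 = d / a
    k2 = d / c
    k3 = (a**2 - b**2 + c**2 + d**2) / (2 * a * c)
    # Half-angle substitution t = tan(theta4 / 2) turns Freudenstein's
    # equation into the quadratic A*t^2 + B*t + C = 0.
    A = math.cos(theta2) - k1 - k2 * math.cos(theta2) + k3
    B = -2 * math.sin(theta2)
    C = k1 - (k2 + 1) * math.cos(theta2) + k3
    disc = B**2 - 4 * A * C
    if disc < 0:
        raise ValueError("linkage cannot assemble at this input angle")
    # Minus root gives the open configuration, plus root the crossed one.
    sign = -1 if open_branch else 1
    return 2 * math.atan2(-B + sign * math.sqrt(disc), 2 * A)

# Example: a Grashof crank-rocker (2 + 9 <= 7 + 6) driven to 30 degrees.
theta4 = four_bar_output_angle(a=2.0, b=7.0, c=9.0, d=6.0,
                               theta2=math.radians(30.0))
print(f"output angle: {math.degrees(theta4):.1f} deg")  # ~117.3 deg
```

In a chain of such elements, each solved output angle would feed the next linkage as its input, which is how a sequence of small, locally rigid rotations can approximate a continuous 3D deformation.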
