Abstract

This paper proposes a new approach that acquires camera parameters and generates an integrated 3D joint set using an RGB-D camera network distributed at arbitrary locations in space. The proposed technique consists of three steps. In the first step, camera parameters are calculated using partial joints as feature points; the intrinsic and extrinsic parameters between cameras are obtained without a specially manufactured calibration plate. In the second step, a 3D joint set is estimated by integrating the joints obtained from each camera using the calculated camera parameters, and at the same time a 3D volumetric model in the form of a point cloud is reconstructed. The third step is a joint correction algorithm: a highly reliable 3D joint set is estimated by correcting the joint positions using the previously reconstructed 3D point cloud. The generated 3D joints accurately express the shape and movement of the 3D human body. The estimated 3D joints were compared with joints measured using a motion capture device to evaluate performance. The temporal standard deviation between the two measurements is very low, ranging from 1.966 mm to 7.99 mm.
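As a rough illustration of the joint integration described in the second step (not the authors' implementation), the Python sketch below transforms per-camera 3D joints into a common world frame using estimated extrinsics and fuses them with a confidence-weighted average. The function name `integrate_joints`, the weighting scheme, and the (R, t) extrinsic convention are assumptions introduced for illustration only.

```python
import numpy as np

def integrate_joints(per_camera_joints, extrinsics, confidences):
    """Fuse per-camera 3D joint estimates into a single joint set.

    per_camera_joints: list of (J, 3) arrays, joints in each camera's frame
    extrinsics: list of (R, t) pairs mapping each camera frame to the world frame
    confidences: list of (J,) arrays with per-joint detection confidences
    Returns: (J, 3) array of fused joint positions in the world frame.
    """
    world_joints, weights = [], []
    for joints, (R, t), conf in zip(per_camera_joints, extrinsics, confidences):
        world_joints.append(joints @ R.T + t)   # transform joints into the world frame
        weights.append(conf)
    world_joints = np.stack(world_joints)       # (C, J, 3)
    weights = np.stack(weights)[..., None]      # (C, J, 1)
    # Confidence-weighted average over cameras; clip to avoid division by zero
    fused = (world_joints * weights).sum(axis=0) / np.clip(weights.sum(axis=0), 1e-6, None)
    return fused

# Hypothetical usage with two cameras observing 17 joints
joints_cam = [np.random.rand(17, 3), np.random.rand(17, 3)]
extr = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.5, 0.0, 0.0]))]
conf = [np.ones(17), np.ones(17)]
fused = integrate_joints(joints_cam, extr, conf)
```

The confidence weighting stands in for whatever reliability measure the paper's correction step uses; the actual method additionally refines the fused joints against the reconstructed point cloud.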
