Abstract

Previous work with OpenPose is applicable only to 2D human pose estimation. 3D human pose estimation can be performed with two types of input: single view and multi view. Multi-view 3D pose estimation is more robust than single view because multiple views allow better depth estimation. 3D poses can be obtained either from 3D pose datasets or from 2D joint locations. However, because of the limitations inherent in 3D datasets, such as insufficient data and usage difficulties, this research uses 2D joint locations. Based on these considerations, we develop a system that takes multi-view cameras and 2D joint locations as input to obtain 3D human motion capture. The inputs are two images from different views, and each image is processed with an OpenPose inference model to obtain 2D joint locations. Camera calibration is required to precisely obtain the intrinsic and extrinsic parameters of the cameras. With these parameters and the 2D joint locations, we reconstruct the 3D motion capture using the triangulation method. The system works for any combination of genders, apparel, and poses. The best distance between the cameras is 66 cm, and the depth error ranges from 16.3 to 18.7 cm. The depth error can be reduced further by improving the 2D joint locations produced by OpenPose, which yields better 3D motion capture.
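
As an illustration of the triangulation step described above, the following is a minimal sketch that assumes OpenCV's cv2.triangulatePoints as the triangulation routine. The function name, the parameter names (K1, R1, t1, and so on), and the array shapes are illustrative assumptions standing in for the calibrated intrinsic/extrinsic parameters and the 2D joint locations returned by OpenPose for each view; they are not the authors' exact implementation.

```python
import numpy as np
import cv2


def triangulate_joints(K1, R1, t1, K2, R2, t2, joints_view1, joints_view2):
    """Triangulate 3D joint positions from 2D joints detected in two calibrated views.

    K1, K2 : (3, 3) intrinsic matrices from camera calibration
    R1, R2 : (3, 3) rotation matrices (extrinsic parameters)
    t1, t2 : (3, 1) translation vectors (extrinsic parameters)
    joints_view1, joints_view2 : (N, 2) arrays of 2D joint locations
        (e.g. the keypoints produced by an OpenPose inference model)

    Returns an (N, 3) array of 3D joint positions.
    """
    # Build the 3x4 projection matrix P = K [R | t] for each camera.
    P1 = K1 @ np.hstack([R1, t1])
    P2 = K2 @ np.hstack([R2, t2])

    # cv2.triangulatePoints expects 2xN arrays of image points.
    pts1 = np.asarray(joints_view1, dtype=np.float64).T
    pts2 = np.asarray(joints_view2, dtype=np.float64).T

    # Linear triangulation; the result is 4xN in homogeneous coordinates.
    joints_h = cv2.triangulatePoints(P1, P2, pts1, pts2)

    # Convert from homogeneous to Euclidean 3D coordinates.
    joints_3d = (joints_h[:3] / joints_h[3]).T
    return joints_3d
```

In this sketch the accuracy of the recovered depth depends directly on the quality of the 2D joint locations fed into the triangulation, which is why improving the OpenPose keypoints reduces the depth error reported above.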
