Purpose
This paper presents a markerless method, combining two-dimensional (2D) and three-dimensional (3D) estimation stages, to recover absolute 3D humanoid robot poses from multiview images.

Design/methodology/approach
The method consists of two separate steps: estimating the 2D poses in the multiview images and recovering the 3D poses from the resulting multiview 2D heatmaps. The 2D estimation is performed by the High-Resolution Net with Epipolar features (HRNet-Epipolar), and a Conditional Random Fields Humanoid Robot Pictorial Structure Model (CRF Robot Model) is proposed to recover the 3D poses.

Findings
The performance of the algorithm is validated by experiments on data sets captured by four RGB cameras in a Qualisys system. The results show that the algorithm achieves a lower Mean Per Joint Position Error (MPJPE) than the Direct Linear Transformation (DLT) and Recursive Pictorial Structure Model (RPSM) algorithms when estimating 14 joints of the humanoid robot.

Originality/value
A new markerless method is proposed for 3D humanoid robot pose estimation. Experimental results show enhanced absolute accuracy, which holds important theoretical significance and application value for humanoid robot pose estimation and motion performance testing.
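To make the baseline and the evaluation metric named above concrete, the following is a minimal Python sketch of Direct Linear Transformation (DLT) triangulation of a single joint from multiview 2D detections and of the MPJPE metric. The function names (triangulate_dlt, mpjpe) and array layouts are illustrative assumptions, not code from the paper.

import numpy as np

def triangulate_dlt(projections, points_2d):
    """Triangulate one 3D joint from its 2D detections in several views
    using the standard Direct Linear Transformation (DLT).

    projections : list of 3x4 camera projection matrices
    points_2d   : list of (x, y) pixel coordinates, one per view
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to metric 3D coordinates

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joints, both of shape (num_joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

In this sketch, absolute accuracy over the 14 robot joints would be reported by averaging mpjpe across all frames of the captured data set.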