Abstract

Three-dimensional (3D) reconstruction of a moving object is a hot topic in structured-light research. However, large-scale motion leads to low point cloud density and low reconstruction accuracy in a conventional structured-light system (SLS), because the field of view (FOV) of the camera must be enlarged to avoid losing the object. In this paper, we propose a gaze tracking 3D reconstruction system (GT3DRS), which uses a saccade mirror to construct a dynamic relationship between the camera and the projector for an object undergoing large-scale motion. In the GT3DRS, the real-time position of the moving object is obtained with a tracking algorithm, and the saccade mirror steers the view according to this position so that the object remains at the center of the FOV, allowing the camera's FOV to cover the object compactly. The GT3DRS increases the image resolution of the object, which effectively raises the point cloud density of the reconstructed 3D profile and improves the reconstruction accuracy. In addition, a new calibration framework is established for the proposed GT3DRS; it reduces system errors by introducing an assumption of nonideal installation and improves calibration efficiency by building an extrinsic parameter transformation model (EPTM) to describe the dynamic relationship. Experimental results verify that the point cloud density is increased by 5.2 times and the reconstruction accuracy is improved by an average of 10.6 times compared with a conventional SLS. Moreover, the tracking reconstruction video sequence reaches 15 fps.
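To illustrate the general geometry behind a mirror-based extrinsic transformation, the sketch below shows how the extrinsics of the virtual (mirrored) camera can be recomputed from the physical camera pose as a planar saccade mirror changes orientation. This is not the paper's EPTM: the angle convention, the names (mirror_normal, virtual_camera_extrinsics), and the assumption of an ideal planar mirror are all illustrative, and the actual model additionally accounts for nonideal installation errors.

```python
# Minimal sketch (assumed conventions, not the paper's EPTM): update the camera
# extrinsics when the scene is viewed through a planar mirror whose pan/tilt
# angles change while tracking the object.
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection about the plane {x : n.x = d}, n a unit normal."""
    n = n / np.linalg.norm(n)
    H = np.eye(4)
    H[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
    H[:3, 3] = 2.0 * d * n                        # offset of the plane from the origin
    return H

def mirror_normal(pan, tilt):
    """Unit normal of the mirror for given pan/tilt angles (assumed convention)."""
    return np.array([np.cos(tilt) * np.sin(pan),
                     np.sin(tilt),
                     np.cos(tilt) * np.cos(pan)])

def virtual_camera_extrinsics(R_cam, t_cam, pan, tilt, d):
    """World-to-camera transform of the mirrored (virtual) camera.

    A point X seen via the mirror projects like the directly seen point H @ X,
    so the effective extrinsics are T_cam @ H. (The reflection flips handedness;
    real systems compensate with an extra axis flip, omitted here for brevity.)
    """
    T_cam = np.eye(4)
    T_cam[:3, :3], T_cam[:3, 3] = R_cam, t_cam
    H = reflection_matrix(mirror_normal(pan, tilt), d)
    return T_cam @ H

# Example: recompute the extrinsics for one mirror pose during tracking.
R0, t0 = np.eye(3), np.zeros(3)
T_virtual = virtual_camera_extrinsics(R0, t0,
                                      pan=np.deg2rad(5.0),
                                      tilt=np.deg2rad(-3.0),
                                      d=0.2)
print(T_virtual)
```

In a calibration framework of this kind, only the mirror plane parameters and the static camera-projector extrinsics need to be calibrated once; the per-frame extrinsics are then generated from the commanded mirror angles rather than re-calibrated, which is what makes the dynamic relationship efficient to maintain.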
