The efficiency and precision of 3D shape reconstruction have long been a focus in fringe projection profilometry (FPP). However, achieving high-quality 3D measurement of isolated or overlapping objects from a single fringe image remains a challenging task. In this paper, a binocular composite grayscale fringe projection profilometry (BCGFPP) method based on deep learning is proposed, in which a two-stage one-to-three network (TONet) is trained to predict the images required for phase unwrapping. The resulting absolute phase map exhibits high precision and avoids the deviation error and periodic ambiguity typically encountered in the traditional sinusoidal composite fringe coding scheme. The Haar transform principle is employed to form a Haar-like composite fringe pattern (HCFP) consisting of three different frequencies, which serves as the network input. The TONet architecture is designed to predict the images required by the tri-frequency four-step phase-shifting method (TFPM). The absolute phase is then calculated, and the disparity map is obtained by matching the absolute phases of the left and right cameras. Finally, the 3D shape of the object is reconstructed using the system calibration parameters. Experimental results demonstrate that the approach greatly reduces the number of fringes required and achieves absolute-phase accuracy close to that of the training set.
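The tri-frequency four-step phase-shifting pipeline mentioned above can be summarized with standard formulas. The sketch below is not the authors' implementation; it is a minimal NumPy illustration assuming ideal four-step intensities with phase shifts of 0, π/2, π, 3π/2 and a simple hierarchical (temporal) unwrapping between two frequencies. The function names `wrapped_phase` and `unwrap_with_reference` are hypothetical.

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Standard four-step phase-shifting (shifts 0, pi/2, pi, 3pi/2).

    Returns the wrapped phase in (-pi, pi] per pixel, since
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
    """
    return np.arctan2(I4 - I2, I1 - I3)

def unwrap_with_reference(phi_high, phi_ref_abs, freq_ratio):
    """Temporal phase unwrapping (hypothetical helper, not the paper's code).

    Uses an already-absolute lower-frequency phase map, scaled by the
    frequency ratio, to estimate the fringe order k of the wrapped
    higher-frequency phase, then removes the 2*pi*k ambiguity.
    """
    k = np.round((phi_ref_abs * freq_ratio - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Hypothetical tri-frequency chain with fringe frequencies f1 < f2 < f3,
# where f1 corresponds to a single fringe so phi1 is absolute by construction:
#   phi2_abs = unwrap_with_reference(phi2_wrapped, phi1_abs, f2 / f1)
#   phi3_abs = unwrap_with_reference(phi3_wrapped, phi2_abs, f3 / f2)
# The highest-frequency absolute phase phi3_abs would then be matched
# between the left and right cameras to build the disparity map.
```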