Abstract

Digital fringe projection profilometry often faces a trade-off between measurement accuracy and efficiency. Projector defocusing is commonly employed to address this challenge and improve the efficiency of high-speed three-dimensional (3D) measurement: 1-bit binary fringe patterns are projected in place of traditional 8-bit sinusoidal patterns and blurred by the defocused lens into quasi-sinusoidal fringes. However, measuring 3D shapes with both high speed and high accuracy remains difficult because of defocus errors, which are introduced by the manual adjustment of the lens focal length and degrade both fringe-pattern quality and measurement accuracy. To overcome this limitation, we propose a multi-stage generative adversarial network with a self-attention mechanism that corrects the imperfect fringe patterns and transforms them into near-ideal sinusoidal fringe patterns. Our generator comprises a multi-stage feature-extraction network with a self-attention mechanism and an encoder-decoder network. The multi-stage network integrates residual and transformer modules to mine global feature information, the self-attention mechanism locates the key areas that require correction, and the encoder-decoder network generates rectified sinusoidal fringe patterns by combining the extracted features with the attended regions. A discriminative network then judges whether the generator's output is indistinguishable from an ideal sinusoidal pattern. In our experiments, we considered different fringe widths and measured objects of various types and colors. The results show that the proposed method improves the quality of defocused fringe patterns and the accuracy of the subsequent 3D reconstruction compared with existing direct defocusing methods.
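The binary defocusing technique that motivates this work can be sketched numerically: Gaussian-blurring a 1-bit square-wave fringe suppresses its higher harmonics, so the result approximates an 8-bit sinusoidal fringe. The sketch below is illustrative only (it is not the paper's network); the fringe period, blur width, and function names are assumptions chosen for demonstration.

```python
import numpy as np

def sinusoidal_fringe(width, period):
    """Ideal 8-bit sinusoidal fringe cross-section (gray levels 0-255)."""
    x = np.arange(width)
    return 127.5 + 127.5 * np.cos(2 * np.pi * x / period)

def binary_fringe(width, period):
    """1-bit square-wave fringe pattern (gray levels 0 or 255)."""
    x = np.arange(width)
    return 255.0 * (np.cos(2 * np.pi * x / period) >= 0)

def gaussian_defocus(signal, sigma):
    """Simulate projector defocus as convolution with a Gaussian PSF."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # tile before convolving so the periodic fringe wraps seamlessly
    n = len(signal)
    return np.convolve(np.tile(signal, 3), kernel, mode="same")[n:2 * n]

width, period = 512, 32        # assumed demo values, not from the paper
ideal = sinusoidal_fringe(width, period)
defocused = gaussian_defocus(binary_fringe(width, period), sigma=4.0)

# A well-chosen blur width removes most square-wave harmonics;
# too little blur leaves them, too much flattens the fringe contrast.
rmse = np.sqrt(np.mean((defocused - ideal) ** 2))
print(f"RMSE vs. ideal sinusoid: {rmse:.2f} gray levels")
```

In practice the blur level is set by manually refocusing the projector lens, which is exactly the step that introduces the defocus errors the proposed network corrects.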
