Abstract

In many optical metrology techniques, fringe pattern analysis is the central algorithm for recovering the underlying phase distribution from the recorded fringe patterns. Despite extensive research efforts for decades, how to extract the desired phase information, with the highest possible accuracy, from the minimum number of fringe patterns remains one of the most challenging open problems. Inspired by recent successes of deep learning techniques for computer vision and other applications, here, we demonstrate for the first time, to our knowledge, that deep neural networks can be trained to perform fringe analysis, which substantially enhances the accuracy of phase demodulation from a single fringe pattern. The effectiveness of the proposed method is experimentally verified using carrier fringe patterns under the scenario of fringe projection profilometry. Experimental results demonstrate its superior performance, in terms of high accuracy and edge preservation, over two representative single-frame techniques: Fourier transform profilometry and windowed Fourier profilometry.

Highlights

  • The numerator M(x, y) and the denominator D(x, y) of the arctangent function are obtained from the N phase-shifted fringe patterns as
    M(x, y) = Σ_{n=0}^{N−1} I_n(x, y) sin δ_n = (N/2) B(x, y) sin φ(x, y), (6)
    D(x, y) = Σ_{n=0}^{N−1} I_n(x, y) cos δ_n = (N/2) B(x, y) cos φ(x, y). (7)
    The expressions above show that the numerator M(x, y) and the denominator D(x, y) are closely related to the original fringe pattern in Eq (1) through a quasilinear relationship with the background image A(x, y)

  • We demonstrate experimentally for the first time, to our knowledge, that the use of a deep neural network can substantially enhance the accuracy of phase demodulation from a single fringe pattern

  • Since one of the inputs of convolutional neural network 2 (CNN2) is the output of convolutional neural network 1 (CNN1), CNN1 was trained first, and CNN2 was trained with the predicted background intensities and captured fringe patterns
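As a sketch of the phase-shifting relations in Eqs (6) and (7), the NumPy snippet below demodulates the wrapped phase from N equally shifted fringe patterns of the form I_n(x, y) = A(x, y) + B(x, y) cos(φ(x, y) − 2πn/N). The function name and the synthetic fringe model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demodulate_phase(patterns):
    """Recover the wrapped phase from N phase-shifted fringe patterns
    I_n(x, y) = A(x, y) + B(x, y) * cos(phi(x, y) - delta_n),
    with equally spaced shifts delta_n = 2*pi*n/N, via Eqs. (6)-(7).
    """
    N = len(patterns)
    deltas = 2 * np.pi * np.arange(N) / N
    # Numerator  M(x, y) = sum_n I_n sin(delta_n) = (N/2) B sin(phi)
    M = sum(I * np.sin(d) for I, d in zip(patterns, deltas))
    # Denominator D(x, y) = sum_n I_n cos(delta_n) = (N/2) B cos(phi)
    D = sum(I * np.cos(d) for I, d in zip(patterns, deltas))
    # Wrapped phase in [-pi, pi]; arctan2 keeps the correct quadrant
    return np.arctan2(M, D)
```

Because the constant background A and the unknown-sign factor (N/2)B cancel out in the ratio M/D, the wrapped phase is recovered exactly for noiseless, equally spaced shifts.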


Summary

Background

Instead of estimating the phase directly, our deep neural networks are trained to predict the intermediate results, i.e., the numerator and the denominator of the arctangent function in Eq (2), to obtain a better phase estimate. With the predicted background intensity and the raw fringe pattern as inputs, CNN2 calculates the numerator and denominator terms. Our training dataset is collected from 80 scenes; it consists of 960 fringe patterns and the corresponding ground-truth data, obtained by a 12-step phase-shifting (PS) method. Since one of the inputs of CNN2 is the output of CNN1, CNN1 was trained first, and CNN2 was then trained with the predicted background intensities and captured fringe patterns.
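The two-stage training order described above can be sketched as follows. This is a minimal PyTorch illustration under assumed conventions: the tiny `nn.Sequential` stand-ins, the function names, and the data-loader tuple layout (fringe pattern, ground-truth background, numerator, denominator) are all hypothetical and do not reproduce the paper's residual-block architectures.

```python
import torch
import torch.nn as nn

# Hypothetical minimal stand-ins for CNN1 and CNN2 (not the paper's networks).
cnn1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 1, 3, padding=1))   # fringe I -> background A
cnn2 = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 2, 3, padding=1))   # (I, A) -> (M, D)

mse = nn.MSELoss()

def train_cnn1(loader, epochs=1):
    # Stage 1: CNN1 learns to predict the background intensity A(x, y).
    opt = torch.optim.Adam(cnn1.parameters())
    for _ in range(epochs):
        for I, A_gt, _, _ in loader:
            opt.zero_grad()
            mse(cnn1(I), A_gt).backward()
            opt.step()

def train_cnn2(loader, epochs=1):
    # Stage 2: CNN2 is trained with CNN1's predicted backgrounds as input.
    opt = torch.optim.Adam(cnn2.parameters())
    for _ in range(epochs):
        for I, _, M_gt, D_gt in loader:
            with torch.no_grad():
                A_pred = cnn1(I)            # CNN1 already trained, kept fixed
            opt.zero_grad()
            out = cnn2(torch.cat([I, A_pred], dim=1))
            mse(out, torch.cat([M_gt, D_gt], dim=1)).backward()
            opt.step()

def predict_phase(I):
    # Inference: wrapped phase from the predicted numerator and denominator,
    # via the arctangent of Eq (2).
    with torch.no_grad():
        out = cnn2(torch.cat([I, cnn1(I)], dim=1))
    return torch.atan2(out[:, 0], out[:, 1])
```

The key design point carried over from the text is the sequencing: CNN2's input depends on CNN1's output, so CNN1 must be trained first and then held fixed while CNN2 is fitted.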

