Abstract

Phase unwrapping is a classical ill-posed problem arising in many significant practical applications such as 3D profiling through fringe projection, synthetic aperture radar, and magnetic resonance imaging. Conventional phase unwrapping techniques estimate the phase either by integrating the phase gradient along constrained paths (referred to as path-following methods) or by minimizing an energy function between the wrapped phase and the approximated true phase (referred to as minimum-norm methods). However, these conventional methods suffer from critical limitations, such as error accumulation and high computational cost, and often fail under low signal-to-noise ratio (SNR) conditions. To address these problems, this paper proposes a novel deep learning framework for phase unwrapping, referred to as “PhaseNet 2.0”. The phase unwrapping problem is formulated as a dense classification problem, and a fully convolutional DenseNet-based neural network is trained to predict the wrap count at each pixel from the wrapped phase map. To train this network, we simulate phase maps of arbitrary shapes and propose a new loss function that accounts for residues by minimizing the difference of gradients and also uses an $L_{1}$ loss to overcome the class imbalance problem. Unlike our previous approach PhaseNet, the proposed method does not require post-processing, is highly robust to noise, accurately unwraps the phase even at a severe noise level of −5 dB, and handles phase maps with relatively high dynamic ranges. Simulation results from the proposed framework are compared with different classes of existing phase unwrapping methods across varying SNR values and discontinuities, and these evaluations demonstrate the advantages of the proposed framework. We also demonstrate the generality of the proposed method on the 3D reconstruction of synthetic CAD models with diverse structures and fine geometric variations. Finally, the proposed method is applied to real data for 3D profiling of objects using the fringe projection technique and digital holographic interferometry. The proposed framework achieves significant improvements over existing methods while remaining highly efficient, running at interactive frame rates on modern GPUs.
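For readers unfamiliar with the wrap-count formulation, the sketch below illustrates how the unwrapped phase is recovered once per-pixel integer wrap counts are available. The function name, array shapes, and the toy ramp example are illustrative assumptions of ours, not code from the paper.

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def unwrap_from_wrap_counts(wrapped: np.ndarray, wrap_counts: np.ndarray) -> np.ndarray:
    """Recover the true phase from per-pixel integer wrap counts.

    Under the dense-classification formulation, a network predicts the
    wrap count k at each pixel, and the unwrapped phase follows as
    phi = psi + 2*pi*k, where psi is the wrapped phase in (-pi, pi].
    """
    return wrapped + TWO_PI * wrap_counts

# Toy demonstration with an ideal classifier (hypothetical data): a smooth
# phase ramp is wrapped, the ground-truth wrap counts are recovered, and
# the reconstruction matches the original phase.
true_phase = np.linspace(0.0, 8.0 * np.pi, 512).reshape(1, -1)  # 1 x 512 phase map
wrapped = np.angle(np.exp(1j * true_phase))                     # wrap into (-pi, pi]
k = np.round((true_phase - wrapped) / TWO_PI)                   # what a perfect network would predict
assert np.allclose(unwrap_from_wrap_counts(wrapped, k), true_phase)
```

Casting unwrapping as per-pixel classification over integer wrap counts turns a global path-integration problem into a dense labeling task, which is what allows a fully convolutional network to predict the solution in a single pass.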
