In millimeter-wave (MMW) imaging, the objects of interest are often modeled as 2-D binary (black-and-white) shapes, with white pixels representing the reflecting interior of the object. However, due to the propagation of the scattered waves, the continuous-domain binary images are convolved with a so-called point-spread function (PSF) before being digitized by sampling. Because the 2-D PSF in MMW imaging is both nonseparable and nonvanishing, exact recovery is challenging. In this communication, we propose a deep-learning approach to image reconstruction. We highlight that wave scatterings are naturally represented by complex-valued quantities, whereas standard deep neural networks (DNNs) accept real-valued inputs. To overcome this challenge, we separate the real and imaginary parts, as if we had two imaging modalities, and concatenate them to form a larger real-valued input. The network then automatically learns how to combine the mutual information between these modalities to reconstruct the final image. Among the advantages of the proposed method are improved robustness against additive noise and against mismatch errors in imaging frequency and object-to-antenna distance; indeed, the method works well in wideband imaging scenarios over a wide range of object-to-antenna distances, even at high noise levels, without requiring a separate calibration stage. We test the method on synthetic data simulated in software as well as on real recordings acquired in the laboratory.
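The real/imaginary channel-splitting idea described above can be illustrated with a minimal sketch. The abstract does not specify the authors' network, so the architecture, layer sizes, and names below are hypothetical placeholders; only the input-formation step (stacking the real and imaginary parts as two channels of a real-valued CNN) reflects the described technique.

```python
import torch
import torch.nn as nn

class TwoChannelReconstructor(nn.Module):
    """Hypothetical stand-in for the reconstruction DNN.

    The only part taken from the abstract is the input formation:
    a complex-valued measurement is split into its real and imaginary
    parts, which are treated as two "modalities" and concatenated as
    two input channels of an ordinary real-valued network.
    """

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),   # 2 channels: Re, Im
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # 1-channel binary image
            nn.Sigmoid(),  # pixel-wise probability of "reflecting interior"
        )

    def forward(self, y_complex: torch.Tensor) -> torch.Tensor:
        # Split the complex scattered field (B x H x W) into two real-valued
        # channels, yielding a B x 2 x H x W real tensor the CNN can accept.
        x = torch.stack((y_complex.real, y_complex.imag), dim=1)
        return self.net(x)

# Usage on synthetic measurements (random stand-in for simulated data):
y = torch.randn(4, 64, 64, dtype=torch.complex64)
model = TwoChannelReconstructor()
recon = model(y)  # B x 1 x 64 x 64, values in (0, 1)
```

In this formulation the network is free to mix the two channels from the first convolution onward, which is one plausible reading of how the "mutual information between these modalities" is combined automatically during training.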