Ptychography is an imaging technique that exploits the redundancy of information created by overlapping adjacent illumination regions to recover the relative phase between those regions and reconstruct the image. To make terahertz ptychography better suited to engineering applications, we propose a deep-learning-based terahertz ptychography system that is straightforward to implement and performs strongly in practice. Specifically, we employ a powerful deep blind degradation model that applies random blurring with isotropic and anisotropic Gaussian kernels, selects the downsampling mode from nearest interpolation, bilinear interpolation, bicubic interpolation, and a down-up-sampling method, and injects Gaussian noise, JPEG compression noise, and processed detector noise. A random shuffle strategy further expands the degradation space of the image. Using paired low/high-resolution images generated by the deep blind degradation model, we trained a multi-layer residual network with residual scaling parameters and a dense connection structure, achieving neural-network super-resolution of terahertz ptychography for the first time. We compare our model against two representative neural networks, SwinIR and Real-ESRGAN. Experimental results show that the proposed method achieves better accuracy and visual quality than other terahertz ptychographic image super-resolution algorithms. Further quantitative evaluation confirms its clear advantage: our method reaches 33.09 dB on the peak signal-to-noise ratio (PSNR) index and 3.05 on the naturalness image quality estimator (NIQE) index. This efficient, engineering-oriented approach fills the gap in improving terahertz ptychography with neural networks.
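The blind degradation pipeline described above can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: bilinear downsampling is approximated by block averaging, JPEG compression noise by coarse quantization, and detector noise and the down-up-sampling branch are omitted; all parameter ranges are hypothetical.

```python
import numpy as np

def gaussian_kernel(size=7, sigma_x=1.5, sigma_y=None, theta=0.0):
    """Isotropic (sigma_y=None) or anisotropic rotated Gaussian blur kernel."""
    sigma_y = sigma_x if sigma_y is None else sigma_y
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # Rotate the coordinate frame so the anisotropic kernel has orientation theta.
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return k / k.sum()

def blur(img, kernel):
    """Same-size convolution via reflect padding and sliding windows."""
    pad = kernel.shape[0] // 2
    p = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(p, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def downsample(img, scale=2, mode="nearest"):
    """Nearest-neighbor decimation, or block averaging as a bilinear stand-in."""
    if mode == "nearest":
        return img[::scale, ::scale]
    h, w = img.shape
    return img[:h - h % scale, :w - w % scale].reshape(
        h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def degrade(hr, rng, scale=2):
    """One random draw from a (simplified) degradation space.

    The random-shuffle strategy permutes the order of the degradation
    operations, expanding the set of reachable low-resolution images.
    """
    ops = [
        # Random isotropic or anisotropic Gaussian blur.
        lambda x: blur(x, gaussian_kernel(
            sigma_x=rng.uniform(0.5, 3.0),
            sigma_y=rng.uniform(0.5, 3.0) if rng.random() < 0.5 else None,
            theta=rng.uniform(0, np.pi))),
        # Additive Gaussian noise.
        lambda x: x + rng.normal(0.0, 0.02, x.shape),
        # Coarse quantization as a crude stand-in for JPEG compression noise.
        lambda x: np.round(x * 32) / 32,
    ]
    rng.shuffle(ops)
    x = hr
    for op in ops:
        x = op(x)
    # Randomly chosen downsampling mode (placed last here for simplicity).
    return downsample(x, scale, mode=rng.choice(["nearest", "bilinear"]))
```

A call such as `degrade(hr, np.random.default_rng(0))` turns one high-resolution image into a randomly degraded low-resolution counterpart, yielding the paired training data the abstract describes.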