Abstract

Doctor Xin Tian from Wuhan University, China, talks to Electronics Letters about the paper ‘Fourier ptychographic reconstruction using mixed Gaussian-Poisson likelihood with total variation regularisation’, page 1041.

My main research fields include computational imaging, image super-resolution, image compression, and convex optimisation theory. My PhD research was on image super-resolution, the aim of which was to improve an image's optical resolution using image processing techniques. However, that improvement was limited. In recent years, the optical resolution limit has been broken by incorporating advanced computational imaging techniques such as Fourier ptychography (FP). This is a promising research field with wide application prospects. I am particularly interested in possible applications in biomedicine and remote sensing. Imaging is one of the most important observation methods in these applications, so we believe that breaking the image resolution limit will have a significant impact on them.

FP is a new computational imaging technique that collects a series of low-resolution images under different angles of illumination from an array of light-emitting diodes. A wide-field, high-resolution image is then reconstructed from these images using an iterative phase retrieval algorithm. In this way, a resolution beyond the optical limit imposed by the numerical aperture of the objective can be realised. The advantages of this method include an improved depth of focus, working distance, and field of view, which together lower the required numerical aperture. The method also enables numerical correction of lens aberrations, which leads to a large space-bandwidth product. These advantages are why FP has recently become one of the most attractive research areas in computational imaging and microscopy.
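The acquisition process described above can be sketched with a minimal simulation of the standard FP forward model: each LED illumination angle shifts the object's spectrum, the objective's pupil acts as a low-pass filter, and the camera records only intensity. This is a generic illustration, not the Letter's implementation; all sizes, the cut-off radius, and the LED grid are assumed for the toy example, and real FP captures would additionally be downsampled to the low-resolution sensor grid.

```python
import numpy as np

def fp_forward(obj, pupil, shifts):
    """Simulate Fourier ptychography captures: for each LED angle, shift the
    object spectrum, low-pass filter with the pupil, and record intensity."""
    F = np.fft.fftshift(np.fft.fft2(obj))             # centred object spectrum
    captures = []
    for dx, dy in shifts:                             # spectrum shift per LED angle
        shifted = np.roll(np.roll(F, dy, axis=0), dx, axis=1)
        field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
        captures.append(np.abs(field) ** 2)           # camera records intensity only
    return captures

# toy example: 64x64 phase object, circular pupil (NA cut-off), 3x3 LED grid
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx**2 + yy**2 <= 10**2).astype(float)        # low-pass cut-off radius 10
obj = np.exp(1j * 0.5 * np.sin(2 * np.pi * xx / 16))  # pure phase object
shifts = [(dx, dy) for dx in (-8, 0, 8) for dy in (-8, 0, 8)]
imgs = fp_forward(obj, pupil, shifts)
print(len(imgs), imgs[0].shape)                       # 9 low-resolution-content captures
```

Because each capture sees only the pupil-sized patch of spectrum around its illumination angle, the reconstruction algorithm must stitch these patches together in Fourier space while recovering the phase lost at the camera.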
In practice, however, most reconstruction methods are sensitive to noise, which degrades the quality of the reconstructed high-resolution image. The issue addressed in this work is therefore how to generate a high-quality, high-resolution image in FP.

In our Letter, we introduce total variation (TV) regularisation into the reconstruction of FP. The main advantage of TV regularisation is its ability to remove noise while preserving image quality. In addition, the mixed Gaussian–Poisson likelihood is adopted as the data fidelity term, which keeps the reconstructed image close to the captured data. The augmented Lagrangian method with Wirtinger derivatives is used to solve the optimisation problem. Both synthetic examples and real captured data are used to test the efficiency of the method. Based on the experimental results, we conclude that the proposed method outperforms other methods, especially at high noise levels.

In FP, noise can have a great impact on the reconstructed image quality, so it is important to design a proper reconstruction method for FP by incorporating novel and advanced image denoising methods. With the development of sparse coding theory, TV-based image denoising has been shown to be feasible and effective, which inspired me to integrate TV into the reconstruction process of FP. The main challenges are how to construct a proper imaging model with the TV constraint and how to solve the resulting optimisation problem using convex optimisation theory.

In the short term, this work can improve the reconstructed image quality of FP, which will promote the applications of FP. As these findings are also applicable to other computational imaging techniques, in the longer term they may bring new motivation or strategies for solving computational imaging problems in other research fields.
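TV's noise-removal behaviour can be illustrated with a generic smoothed-TV denoiser. This is a simple gradient-descent sketch on a real-valued image, not the Letter's augmented-Lagrangian solver with Wirtinger derivatives and mixed Gaussian–Poisson fidelity; all parameter values are illustrative. The TV term penalises the total magnitude of local gradients, so small random fluctuations are suppressed while a sharp edge (one large, sparse gradient) survives.

```python
import numpy as np

def tv_denoise(noisy, lam=0.2, step=0.05, iters=200, eps=0.1):
    """Gradient descent on 0.5*||x - noisy||^2 + lam * smoothed isotropic TV.
    eps smooths the gradient magnitude so the objective is differentiable;
    np.roll gives periodic boundary handling, adequate for this illustration."""
    x = noisy.copy()
    for _ in range(iters):
        gx = np.roll(x, -1, axis=1) - x               # forward differences
        gy = np.roll(x, -1, axis=0) - x
        mag = np.sqrt(gx**2 + gy**2 + eps**2)         # smoothed gradient magnitude
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (x - noisy) - lam * div                # gradient of the objective
        x -= step * grad
    return x

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                   # image with one sharp edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tv_denoise(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

In the Letter this kind of TV penalty is coupled with the mixed Gaussian–Poisson data fidelity inside one optimisation problem, rather than applied as a post-processing step as in this sketch.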
In the future, I will study the influence of the pupil function on the reconstruction of FP. How to use deep learning techniques in this field may also be part of my future work. At present, my other work in computational imaging is single-photon imaging. This is also a promising research direction, as it can be used to detect objects in extremely low-light conditions. One key issue, similar to FP, is that noise is severe in low-light conditions, which degrades image quality. Therefore, how to generate high-quality single-photon images in noisy conditions is very important for this work.

In recent years, FP has attracted a great deal of research interest and has developed very quickly. Many new computational imaging techniques are based on FP, such as three-dimensional FP, colour FP, and so on. Deep learning, a sub-field of machine learning, is another frontier technology; it can produce computational systems that discover patterns or structure in data by themselves. This implies that, with the help of deep learning techniques, researchers could design high-performance reconstruction algorithms for FP and other computational imaging techniques in the next ten years.
