Two-photon fluorescence microscopy has enabled three-dimensional (3D) neural imaging of deep cortical regions. While it captures detailed neural structures in the x–y plane, image quality along the depth direction is degraded by lens blur, which often makes it difficult to identify neural connectivity. To address this problem, we propose a novel approach for restoring an isotropic image volume by estimating and fusing the intersection regions of images captured from three orthogonal viewpoints using convolutional neural networks (CNNs). Because convolution on 3D images is computationally expensive, the proposed method is structured as a cascade of CNN models (rigid transformation, dense registration, and deblurring networks) for more efficient processing. In addition, to enable self-supervised learning, we trained the CNN models on synthetic images simulated to reflect the distortions of the microscopic imaging process. In extensive experiments, the proposed method achieved substantial improvements in image quality.
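The abstract does not include implementation details, but the cascaded design it describes (rigid alignment, dense registration, deblurring, and fusion of three orthogonal views) can be sketched as below. This is a minimal, hypothetical PyTorch sketch under assumed architectures and shapes; all module names, layer choices, and the averaging fusion step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a cascaded pipeline: rigid transformation ->
# dense registration -> fusion -> deblurring. All details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RigidTransformNet(nn.Module):
    """Predicts a 3x4 affine matrix that aligns a moving view to the reference."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 12),  # 3x4 transform parameters (untrained here)
        )

    def forward(self, moving, reference):
        theta = self.encoder(torch.cat([moving, reference], dim=1)).view(-1, 3, 4)
        grid = F.affine_grid(theta, moving.shape, align_corners=False)
        return F.grid_sample(moving, grid, align_corners=False)


class DenseRegistrationNet(nn.Module):
    """Predicts a dense displacement field for non-rigid alignment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),  # displacement in normalized grid coords
        )

    def forward(self, moving, reference):
        flow = self.net(torch.cat([moving, reference], dim=1))
        b = moving.shape[0]
        # Identity sampling grid plus the predicted displacement.
        identity = F.affine_grid(
            torch.eye(3, 4, device=moving.device).unsqueeze(0).repeat(b, 1, 1),
            moving.shape, align_corners=False)
        warped_grid = identity + flow.permute(0, 2, 3, 4, 1)
        return F.grid_sample(moving, warped_grid, align_corners=False)


class DeblurNet(nn.Module):
    """Residual 3D CNN that sharpens the fused, axially blurred volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)


def restore_isotropic(view_xy, view_xz, view_yz):
    """Cascade: rigidly align, densely register, fuse by averaging, then deblur."""
    rigid, dense, deblur = RigidTransformNet(), DenseRegistrationNet(), DeblurNet()
    aligned = [view_xy]
    for v in (view_xz, view_yz):
        v = rigid(v, view_xy)   # coarse alignment to the reference view
        v = dense(v, view_xy)   # fine, non-rigid alignment
        aligned.append(v)
    fused = torch.stack(aligned).mean(dim=0)  # simple average fusion (assumption)
    return deblur(fused)


if __name__ == "__main__":
    views = [torch.rand(1, 1, 32, 32, 32) for _ in range(3)]
    print(restore_isotropic(*views).shape)  # torch.Size([1, 1, 32, 32, 32])
```

Splitting the problem into three small networks, as the abstract suggests, keeps each 3D convolutional stage small, which is the stated motivation for the cascade over a single monolithic 3D model.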