Abstract

In this paper, we propose an advanced three-dimensional visualization method for an integral imaging microscope system that simultaneously improves the resolution and quality of the reconstructed image. The main advance of the proposed method is that it generates a high-quality three-dimensional model without resolution limitations by combining a high-resolution two-dimensional color image with depth data obtained through a fully convolutional neural network. First, the high-resolution two-dimensional image and an elemental image array of a specimen are captured, and the orthographic-view image is reconstructed from the elemental image array. Then, after the brightness of the input images is made uniform, a convolutional neural network-based depth estimation produces a more accurate and improved depth image, and the noise in the resulting depth image is filtered out. Subsequently, the estimated depth data are combined with the high-resolution two-dimensional image and transformed into a high-quality three-dimensional model. The experiment confirmed that the displayed high-quality three-dimensional model closely resembles the original image.
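To make the processing chain above concrete, the following is a minimal Python sketch of the same steps, assuming OpenCV/NumPy for image handling; the function names, file names, and the dummy `estimate_depth` stand-in (a real fully convolutional depth network would be invoked there) are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def uniform_brightness(img_bgr):
    """Make the luminance uniform so illumination differences do not bias depth estimation."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def estimate_depth(img_bgr):
    """Stand-in for the CNN-based depth estimator; a trained fully convolutional network would be called here."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return 1.0 - gray / 255.0  # dummy depth in [0, 1]; replace with network inference

def to_point_cloud(color_hr, depth, depth_scale=1.0):
    """Combine each pixel of the high-resolution color image with its estimated depth into a colored point cloud."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs, ys, depth * depth_scale], axis=-1).reshape(-1, 3).astype(np.float32)
    cols = color_hr.reshape(-1, 3)
    return pts, cols

# Pipeline: orthographic view -> uniform brightness -> CNN depth -> denoise -> high-quality 3D model
ortho = cv2.imread("orthographic_view.png")          # reconstructed from the elemental image array
color_hr = cv2.imread("high_res_2d.png")             # high-resolution 2D color image
depth = estimate_depth(uniform_brightness(ortho))    # CNN-based depth estimation
depth = cv2.medianBlur(depth.astype(np.float32), 5)  # filter noise in the resulting depth image
depth = cv2.resize(depth, (color_hr.shape[1], color_hr.shape[0]))  # match the high-resolution image
points, colors = to_point_cloud(color_hr, depth)     # combine depth with color into a 3D model
```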

Highlights

  • Integral imaging is an autostereoscopic three-dimensional (3D) imaging technique that captures and reproduces full 3D information using a two-dimensional (2D) array of lenses [1]–[8]

  • Although the depth-of-field of an integral imaging microscope (IIM) is wider than that of an optical 2D microscope according to geometric or wave optics theories [9]–[11], [16], [17], it is still insufficient for comfortable viewing

  • We investigated an advanced visualization method for the IIM display that generates a high-quality 3D model using a convolutional neural network (CNN)-based depth estimation algorithm


Summary

Introduction

Integral imaging is an autostereoscopic three-dimensional (3D) imaging technique that captures and reproduces full 3D information using a two-dimensional (2D) array of lenses [1]–[8]. In general integral imaging and related light-field techniques other than IIM, the depth information is extracted from the disparity between elemental images [32], [33]. Such methods do not require camera calibration and yield quite accurate depth information. We propose a 3D visualization method for IIM that provides improved resolution and quality of the reconstruction, using a convolutional neural network (CNN)-based depth estimation algorithm. To overcome the limitations of IIM systems, such as the low resolution and low quality of the reconstruction caused by illumination effects and insufficient parallax disparity, the CNN-based model estimates depth with appropriate pre- and post-processing. When the resolution is expanded through the proposed visualization method, the quality of the reconstructed 3D model is hardly degraded; the depth, color, and parallax data are represented simultaneously with much improved resolution and quality, which resolves the limitations of existing IIM systems.
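For the disparity-based depth extraction mentioned above, a minimal sketch of the underlying triangulation is given below, assuming the lens pitch acts as the stereo baseline and the lens-to-sensor gap as the focal distance; the paper's own derivation appears in the "Geometrical Analysis for the Disparity Between Elemental Images" section, and the symbol names and example values here are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, lens_pitch_mm, gap_mm, pixel_pitch_mm):
    """Triangulate depth from the disparity between two neighboring elemental images.

    Uses the standard relation z = B * g / d, where the baseline B is taken as the
    lens pitch, g is the gap between the lens array and the sensor, and d is the
    disparity converted to physical units.
    """
    d_mm = np.asarray(disparity_px, dtype=np.float64) * pixel_pitch_mm
    with np.errstate(divide="ignore"):
        return np.where(d_mm > 0, lens_pitch_mm * gap_mm / d_mm, np.inf)

# Example: 3-pixel disparity, 1 mm lens pitch, 3 mm gap, 5 um pixels -> z = 1 * 3 / 0.015 = 200 mm
print(depth_from_disparity(3, lens_pitch_mm=1.0, gap_mm=3.0, pixel_pitch_mm=0.005))
```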

Geometrical Analysis for the Disparity Between Elemental Images
CNN-Based Depth Estimation
Proposed IIM System Using CNN-Based Depth Estimation Algorithm
Experimental Results
Evaluations for Output Images
Conclusion