Abstract

The integral imaging system has received considerable research attention because it can be applied to real-time three-dimensional image displays with a continuous viewing angle without supplementary devices. Most previous approaches place a physical micro-lens array in front of the image, so that the view through each lens changes with the viewing angle. Computational integral imaging systems with virtual micro-lens arrays have been proposed to give users the flexibility to change the micro-lens array and focal length while reducing distortions caused by physical mismatches with the lens array. However, when dealing with large-scale images, computational integral imaging methods represent only part of the whole image because the virtual lens array is much smaller than the input image. As a result, previous approaches produce sub-aperture images with a small field of view and require additional devices to obtain the depth information needed by the integral imaging pickup system. In this paper, we present a single-image-based computational RGB-D integral imaging pickup system that achieves a large field of view in real time. The proposed system comprises three steps: deep-learning-based automatic depth map estimation from an RGB input image without the help of an additional device, a hierarchical integral imaging system for a large field of view in real time, and post-processing that uses an inpainting method for optimized visualization of the failed pickup areas. Quantitative and qualitative experimental results verify the robustness of the proposed approach.
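
To make the third step concrete, here is a minimal sketch of hole filling for the failed pickup areas, assuming the pickup stage also returns a mask of pixels it never wrote. It uses OpenCV's Telea inpainting as a stand-in for the paper's inpainting method; the array sizes and synthetic data are purely illustrative and not from the source.

    import numpy as np
    import cv2

    # Stand-in data: an elemental image array plus a boolean mask of
    # "failed pickup" pixels that were never written during pickup.
    rng = np.random.default_rng(0)
    eia = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in elemental image array
    failed = rng.random((256, 256)) < 0.05                     # stand-in failed-pickup mask

    # OpenCV expects an 8-bit single-channel mask; non-zero pixels are inpainted.
    mask = failed.astype(np.uint8) * 255
    filled = cv2.inpaint(eia, mask, 3, cv2.INPAINT_TELEA)      # radius 3 px, Telea method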

Highlights

  • The integral imaging system has played an important role in the field of three-dimensional (3D) displays, creating a light field using two-dimensional (2D) micro-lens arrays

  • Because integral imaging systems are constrained by the micro-lens array size, the number of micro-lenses in the array, and the input image resolution, the resulting sub-aperture images have low resolution and a small FOV when the system is applied to large-scale images such as high-resolution or panorama images; a sub-aperture image is obtained by transposing the pixels of the elemental images (see the sketch after this list)

  • To overcome the major limitations of previous physical and computational integral imaging methods, and to make it convenient for users to generate large elemental image arrays without supplementary devices, we propose a large-FOV integral imaging pickup system that works from a single image in real time
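
As a minimal sketch of the elemental-image-to-sub-aperture transposition mentioned above (the reshape/transpose pattern, array sizes, and function name are illustrative assumptions, not the paper's implementation):

    import numpy as np

    def eia_to_subapertures(eia: np.ndarray, ei_res: int) -> np.ndarray:
        # eia: (Ny*ei_res, Nx*ei_res, 3) array holding Ny x Nx elemental images,
        # each ei_res x ei_res pixels. The sub-aperture (view) image (v, u)
        # gathers pixel (v, u) from every elemental image, i.e. the lens axes
        # and the per-lens pixel axes are transposed.
        H, W, C = eia.shape
        ny, nx = H // ei_res, W // ei_res
        return eia.reshape(ny, ei_res, nx, ei_res, C).transpose(1, 3, 0, 2, 4)

    # Example: 10 x 10 virtual lenses, 8 x 8 pixels behind each lens
    eia = np.random.rand(80, 80, 3)
    views = eia_to_subapertures(eia, 8)        # shape (8, 8, 10, 10, 3)
    center_view = views[4, 4]                  # one 10 x 10 x 3 sub-aperture image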

Summary

Introduction

The integral imaging system has played an important role in the field of three-dimensional (3D) displays, creating a light field using two-dimensional (2D) micro-lens arrays. Larger and more complex micro-lens arrays are needed to generate an undistorted elemental image array from a large-scale object. To tackle these limitations, some approaches have used pixel mapping [3,4,5,6,7]. To overcome the major limitations of previous physical and computational integral imaging methods, and to make it convenient for users to generate large elemental image arrays without supplementary devices, we propose a large-FOV integral imaging pickup system that works from a single image in real time.
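
For concreteness, the sketch below shows a simplified pixel-mapping pickup with a virtual pinhole-lens array: each RGB pixel, together with its depth, is projected through every lens centre onto the elemental image plane. The geometry, parameter names (n_lens, ei_res, pitch, gap), and the forward-splatting strategy are illustrative assumptions rather than the hierarchical method proposed in the paper; pixels that are never written correspond to the failed pickup areas later handled by the inpainting step.

    import numpy as np

    def pickup_pixel_mapping(rgb, depth, n_lens=16, ei_res=32, pitch=1.0, gap=3.0):
        # rgb: (H, W, 3) image; depth: (H, W) positive depth in the same units
        # as pitch/gap. Returns the elemental image array and a mask of
        # never-written (failed pickup) pixels.
        H, W, _ = rgb.shape
        eia = np.zeros((n_lens * ei_res, n_lens * ei_res, 3), rgb.dtype)
        hit = np.zeros(eia.shape[:2], dtype=bool)

        # Place the source image on a plane spanning the whole lens array.
        xs = (np.arange(W) + 0.5) / W * n_lens * pitch
        ys = (np.arange(H) + 0.5) / H * n_lens * pitch
        X, Y = np.meshgrid(xs, ys)

        for j in range(n_lens):                                # lens row
            for i in range(n_lens):                            # lens column
                cx, cy = (i + 0.5) * pitch, (j + 0.5) * pitch  # lens centre
                # Perspective projection through the lens centre onto the
                # elemental image plane located a distance `gap` behind it.
                u = cx - gap * (X - cx) / depth
                v = cy - gap * (Y - cy) / depth
                px = np.floor((u - i * pitch) / pitch * ei_res).astype(int)
                py = np.floor((v - j * pitch) / pitch * ei_res).astype(int)
                ok = (px >= 0) & (px < ei_res) & (py >= 0) & (py < ei_res)
                eia[j * ei_res + py[ok], i * ei_res + px[ok]] = rgb[ok]
                hit[j * ei_res + py[ok], i * ei_res + px[ok]] = True
        return eia, ~hit                                       # ~hit marks failed pickup areas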

Large-Field-of-View RGB-D Integral Imaging System
Multi-View Attention Module-Based Monocular Depth Map Estimation
Hierarchical Integral RGB-D Imaging System
Multiple Shift-Lens Array Manipulation Process
Sub-Integral Imaging Pickup Process
Postprocessing to Eliminate Failed Pickup Areas
A: Sub-Integral Imaging Pickup Process Function
Quantitative and Qualitative Analysis
Method
Conclusions