Abstract

Depth estimation for light field images is essential for applications such as light field image compression, perspective view reconstruction and 3D reconstruction. Previous depth map estimation approaches fail to capture sharp transitions around object boundaries due to occlusions, making many current approaches unreliable at depth discontinuities. This is especially the case for light field images, because pixels do not exhibit photo-consistency in the presence of occlusions. In this paper, we propose an algorithm to estimate the depth map for light field images using depth from defocus. Our approach compares defocus cues over a small patch of pixels in each focal stack image, allowing the algorithm to generate sharper depth boundaries. Then, in contrast to existing approaches that use defocus cues for depth estimation, we perform the image similarity check in the frequency domain to generate the depth map. Processing in the frequency domain reduces the individual pixel errors that occur when directly comparing RGB images, making the algorithm more resilient to noise. The algorithm has been evaluated on both a synthetic image dataset and real-world images from the JPEG dataset. Experimental results demonstrate that our proposed algorithm outperforms state-of-the-art depth estimation techniques for light field images, particularly in the case of noisy images.
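To illustrate the patch-based, frequency-domain similarity check described above, the following Python sketch compares the magnitude spectrum of a small patch in each focal stack slice against the corresponding patch of an all-in-focus reference and picks the best-matching slice per pixel. It is a minimal sketch, not the authors' implementation; the array layout, the `focal_stack` and `reference` names, the patch size and the sum-of-squared-differences spectral error are assumptions made for illustration only.

```python
import numpy as np

def depth_from_defocus_fft(focal_stack, reference, patch_size=7):
    """Pick, for every pixel, the focal slice whose local frequency
    content best matches an all-in-focus reference patch.

    focal_stack : (N, H, W) float array, slice i focused at depth i
    reference   : (H, W) all-in-focus image
    Returns an (H, W) integer map of best-matching slice indices.
    """
    n_slices, height, width = focal_stack.shape
    half = patch_size // 2
    depth_index = np.zeros((height, width), dtype=np.int32)

    for y in range(half, height - half):
        for x in range(half, width - half):
            ref_patch = reference[y - half:y + half + 1, x - half:x + half + 1]
            ref_spec = np.abs(np.fft.fft2(ref_patch))   # magnitude spectrum
            errors = np.empty(n_slices)
            for i in range(n_slices):
                patch = focal_stack[i, y - half:y + half + 1, x - half:x + half + 1]
                spec = np.abs(np.fft.fft2(patch))
                # Compare spectra rather than raw intensities: an isolated
                # noisy pixel spreads its energy over all frequency bins, so
                # the patch-level comparison is less sensitive to it.
                errors[i] = np.sum((spec - ref_spec) ** 2)
            depth_index[y, x] = int(np.argmin(errors))   # sharpest match
    return depth_index
```

The resulting index map can then be converted to depth by mapping each slice index to the depth at which that slice of the focal stack was focused.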

Highlights

  • Depth maps from light field images are essential for light field image compression techniques, reconstructing views from a sparse set of perspective views, increasing the number of perspective views and 3D reconstruction

  • The synthetic image dataset used for evaluation is widely used for validation of depth estimation algorithms for light field images, as it contains ground-truth disparity and depth maps

  • We propose a depth estimation algorithm based on the concept of depth from defocus

Introduction

Conventional photography captures only limited information from the light passing through the camera lens. Cameras record the sum of the intensities of the light rays striking each point in the image, not the individual amounts of light traveling along the different rays that contribute to that sum [13]. Light field imaging technology captures richer visual information by representing the distribution of light in free space [14], which means that a light field image records both the pixel intensity and the direction of the incident light. The additional dimensions of captured data enable the generation of images at different focal lengths and with extended depth of field using ray-tracing techniques, allowing images to be manipulated in a more flexible way [15]. Rearranging the rays estimates where they would terminate if the camera were focused on the desired depth [17].
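The refocusing described above is commonly realised by shifting each sub-aperture (perspective) view in proportion to its angular offset and averaging, which amounts to re-intersecting the rays at the chosen depth. The Python sketch below shows this shift-and-sum formulation; the `sub_apertures` layout, the `slope` parameter and the use of `scipy.ndimage.shift` are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(sub_apertures, slope):
    """Shift-and-sum refocusing of a 4D light field.

    sub_apertures : (U, V, H, W) array of perspective (sub-aperture) views
    slope         : disparity per unit of angular offset; each value
                    corresponds to a different in-focus depth
    """
    n_u, n_v, height, width = sub_apertures.shape
    centre_u, centre_v = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    refocused = np.zeros((height, width), dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view in proportion to its angular offset, i.e.
            # rearrange the rays so that they intersect at the chosen depth.
            dy = slope * (u - centre_u)
            dx = slope * (v - centre_v)
            refocused += shift(sub_apertures[u, v].astype(np.float64),
                               (dy, dx), order=1)
    return refocused / (n_u * n_v)
```

Sweeping `slope` over a range of values produces the focal stack that depth-from-defocus methods, including the one proposed here, operate on.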
