Abstract

The precise combination of an image sensor and a micro-lens array enables light-field cameras to record both the angular and the spatial information of incoming light, so disparity and depth can be computed from a single light-field image captured by a single camera. In turn, 3D models of the recorded objects can be recovered, which means a 3D measurement system can be built around one light-field camera. However, reflective and texture-less areas in light-field images are difficult cases in which existing algorithms struggle to compute disparity correctly. To tackle this problem, we introduce VommaNet, a novel end-to-end network that retrieves multi-scale features from reflective and texture-less regions for accurate disparity estimation. Meanwhile, our network achieves similar or better performance in other regions compared to state-of-the-art algorithms, on both synthetic light-field images and real-world data.
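For intuition, disparity from a light-field image converts to metric depth by the usual triangulation relation Z = f · B / d, where f is the focal length in pixels, B the baseline between adjacent sub-aperture views, and d the disparity in pixels. The sketch below is illustrative only; the calibration constants `focal_px` and `baseline_mm` are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=500.0, baseline_mm=0.05):
    """Convert a disparity map (pixels between adjacent sub-aperture
    views) to a depth map via Z = f * B / d.

    focal_px and baseline_mm are hypothetical calibration constants;
    real values come from metric calibration of the camera [9,10].
    """
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)          # zero disparity -> infinite depth
    valid = np.abs(d) > 1e-6
    depth[valid] = focal_px * baseline_mm / d[valid]
    return depth
```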

Highlights

  • We propose a new end-to-end network for light-field disparity estimation that addresses reflective and texture-less areas by enlarging the receptive field in the early layers of the network, so that it can infer accurate depth values for these regions from their edges while maintaining similar or better performance in other regions compared to existing algorithms (see the sketch after this list)

  • We build a fast and accurate 3D measurement system based on a single light-field camera and this newly proposed depth estimation network
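The excerpt does not spell out how the receptive field is enlarged in the early layers; one common way to achieve this is parallel dilated convolutions whose outputs are concatenated into multi-scale features. The PyTorch sketch below is a minimal illustration under that assumption, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates.

    Larger dilations widen the receptive field already in the first
    layers, so pixels inside a texture-less region can aggregate
    evidence from its (textured) edges. This is an assumed design,
    not the configuration reported in the paper.
    """

    def __init__(self, in_ch=3, out_ch=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )

    def forward(self, x):
        # Concatenate the per-scale features along the channel axis.
        return torch.cat([b(x) for b in self.branches], dim=1)

# Example: a single RGB view (hypothetical input size).
views = torch.randn(1, 3, 512, 512)
features = MultiScaleBlock()(views)   # -> (1, 64, 512, 512)
```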


Introduction

With recent developments in lenslet-based light-field camera technology [1], especially the commercially available products from Lytro [2] and Raytrix [3], depth estimation from light-field images has become an active topic in computer vision. Based on the two-plane parameterization [4], a light-field image can be used to generate multi-view images with slightly different viewpoints and refocused images with different focal planes [5]. With these advantages, various algorithms [6,7,8] have been developed to estimate depth information from a single light-field image. Such depth information, when combined with sophisticated metric calibration techniques [9,10], can generate very dense point clouds as well as corresponding textures. However, reflective and texture-less regions remain difficult for these methods. Attempts have been made to recover depth information for such regions with the help of shape-from-shading [15,16,17], but doing so requires prior knowledge of illumination (captured or estimated), and is generally limited to Lambertian surfaces or surfaces with uniform albedo.
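To make the two-plane parameterization concrete: a 4D light field L(u, v, s, t) indexes each ray by its intersections with the lens plane (u, v) and the sensor plane (s, t), and a refocused image can be obtained by shifting each sub-aperture view in proportion to its (u, v) offset and averaging (shift-and-sum). The sketch below illustrates this under an assumed array layout; it is not code from the paper.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subviews, slope):
    """Shift-and-sum refocusing over sub-aperture views.

    subviews : float array of shape (U, V, H, W), the sub-aperture
               images extracted from one light-field capture
               (assumed layout, for illustration).
    slope    : pixel shift per unit angular offset; varying it moves
               the synthetic focal plane.
    """
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central view, then accumulate.
            dy, dx = slope * (u - cu), slope * (v - cv)
            acc += nd_shift(subviews[u, v], (dy, dx), order=1)
    return acc / (U * V)
```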

