Abstract

Light field cameras capture a scene's multi-directional light field in a single image, enabling depth estimation. In this paper, we introduce a fully automatic method for depth estimation from a single plenoptic image that runs a RANSAC-like algorithm for feature matching. The novelty of our method lies in the global scheme used to back-project correspondences, found via photometric similarity, into a 3D virtual point cloud, and in the different strategies used to build a depth map from the generated point cloud. We exploit lenses with different focal lengths in a multiple-depth-map refinement phase to produce a dense depth map. Tests with simulations and real images are presented and compared with the state of the art, showing comparable accuracy at substantially lower computational cost.
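
The pipeline outlined above (photometric matching between micro-lens sub-images, back-projection of correspondences into a 3D virtual point cloud, and projection of that cloud into a depth map) can be illustrated with the minimal Python sketch below. It assumes rectified sub-images with a known baseline and focal length, uses plain SSD as the photometric similarity, and omits the RANSAC-like consensus step and the multi-focal-length refinement; all function names and parameters are illustrative, not the authors' implementation.

    """Minimal sketch of the depth pipeline under assumed simplifications:
    rectified micro-lens sub-images, known baseline and focal length, and
    SSD as the photometric similarity. Names are illustrative only."""
    import numpy as np

    def ssd(a, b):
        """Photometric similarity: sum of squared differences of two patches."""
        return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

    def match_disparity(ref, tgt, x, y, win=4, d_max=20):
        """Best integer disparity of pixel (x, y) of `ref` inside `tgt` along x."""
        patch = ref[y - win:y + win + 1, x - win:x + win + 1]
        costs = []
        for d in range(d_max):
            if x - d - win < 0:
                break
            cand = tgt[y - win:y + win + 1, x - d - win:x - d + win + 1]
            if cand.shape == patch.shape:
                costs.append((ssd(patch, cand), d))
        return min(costs)[1] if costs else None

    def backproject(x, y, disparity, focal, baseline, cx, cy):
        """Triangulate a correspondence into a 3D virtual point (pinhole model)."""
        z = focal * baseline / max(disparity, 1e-6)
        return np.array([(x - cx) * z / focal, (y - cy) * z / focal, z])

    def depth_map_from_points(points, shape, focal, cx, cy):
        """Project sparse 3D points back onto the sensor, keeping the nearest depth."""
        depth = np.full(shape, np.inf)
        for X, Y, Z in points:
            u = int(round(focal * X / Z + cx))
            v = int(round(focal * Y / Z + cy))
            if 0 <= v < shape[0] and 0 <= u < shape[1]:
                depth[v, u] = min(depth[v, u], Z)
        return depth

In practice, the sparse depth map returned by depth_map_from_points would be densified and refined; here it only serves to make the back-projection and re-projection steps concrete.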
