Abstract

Most approaches to recovering a scene's 3D depth from a single image model the point spread function (PSF) as a 2D Gaussian. However, those methods suffer from noise and struggle to recover high-quality depth. We present a simple yet effective approach to accurately estimate the amount of spatially varying defocus blur at edges, based on a Cauchy distribution model for the PSF. The raw image is re-blurred twice using two known Cauchy distribution kernels, and the defocus blur amount at edges is derived from the gradient ratio between the two re-blurred images. By propagating the blur amount at edge locations to the entire image using matting interpolation, a full depth map is then recovered. Experimental results on several real images demonstrate both the feasibility and the effectiveness of our method, which uses a non-Gaussian PSF model, in providing a better estimate of the defocus map from a single uncalibrated defocused image. These results also show that our method is robust to image noise, inaccurate edge locations, and interference from neighboring edges. It generates more accurate scene depth maps than most existing methods based on a Gaussian PSF model.
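The gradient-ratio step can be illustrated in one dimension. The sketch below is not the authors' implementation but a minimal illustration under stated assumptions: a step edge blurred by an unknown Cauchy PSF is re-blurred with two known Cauchy kernels, and the unknown scale is recovered from the gradient ratio at the edge, using the fact that Cauchy scale parameters add under convolution. The function names, kernel radius, and re-blur scales `g1`, `g2` are illustrative choices; the actual method operates on 2D images and propagates the per-edge estimates to a full map with matting interpolation.

```python
import numpy as np

def cauchy_kernel(gamma, radius=200):
    """Normalized 1D Cauchy (Lorentzian) kernel with scale gamma."""
    x = np.arange(-radius, radius + 1)
    k = gamma / (np.pi * (x**2 + gamma**2))
    return k / k.sum()

def estimate_blur_at_edge(signal, edge_idx, g1=1.0, g2=3.0):
    """Estimate the unknown Cauchy blur scale at a step edge from the
    gradient ratio of two re-blurred copies of the signal.

    Since Cauchy scales add under convolution, the gradient magnitudes
    at the edge satisfy r = (gamma + g2) / (gamma + g1), which is
    solved for gamma below.  g1 < g2 are the two known re-blur scales.
    """
    b1 = np.convolve(signal, cauchy_kernel(g1), mode="same")
    b2 = np.convolve(signal, cauchy_kernel(g2), mode="same")
    r = np.abs(np.gradient(b1))[edge_idx] / np.abs(np.gradient(b2))[edge_idx]
    return (g2 - r * g1) / (r - 1.0)

# Synthetic check: a step edge blurred with an unknown scale gamma = 2.0.
n, gamma_true = 1001, 2.0
step = np.where(np.arange(n) < n // 2, 0.0, 1.0)
blurred = np.convolve(step, cauchy_kernel(gamma_true), mode="same")
gamma_est = estimate_blur_at_edge(blurred, edge_idx=n // 2)
```

The estimate is approximate on a discrete grid (the edge center falls between samples and the kernels are truncated), but it lands close to the true scale, which is the essence of the gradient-ratio idea.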
