Abstract

Depth estimation plays an important role in many computer vision and computer graphics applications, yet existing depth measurement techniques remain complex and restrictive. In this paper, we present a novel technique for inferring depth via depth from defocus using active quasi-random point projection patterns. A quasi-random point pattern is projected onto the scene of interest, and each projection point in the image captured by a cellphone camera is analyzed with a deep learning model to estimate the depth at that point. The proposed method has a relatively simple setup, consisting of a camera and a projector, and enables depth inference from a single capture. We evaluate the proposed method both quantitatively and qualitatively and demonstrate strong potential for simple and efficient depth sensing.
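To make the per-point pipeline described above concrete, the sketch below illustrates one plausible way to realize it: detect the projected dots in the captured image, crop a small patch around each dot, and regress a depth value from the defocus blur of that patch with a small CNN. This is not the authors' implementation; the patch size, network architecture, threshold-based dot detector, and the synthetic demo image are assumptions made purely for illustration, and the actual model would be trained on defocused dot patches with known depths.

```python
# Minimal sketch (assumptions only, not the paper's implementation) of
# per-point depth from defocus with an actively projected dot pattern.
import numpy as np
import torch
import torch.nn as nn


class PatchDepthNet(nn.Module):
    """Hypothetical CNN that regresses depth from the defocus blur of one dot."""

    def __init__(self, patch_size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 1),  # scalar depth estimate for one projected point
        )

    def forward(self, x):
        return self.regressor(self.features(x))


def detect_projection_points(gray: np.ndarray, threshold: float = 0.8):
    """Find bright projected dots with a simple intensity threshold (assumed detector)."""
    ys, xs = np.nonzero(gray > threshold * gray.max())
    return list(zip(ys.tolist(), xs.tolist()))


def estimate_sparse_depth(gray: np.ndarray, model: nn.Module, patch_size: int = 32):
    """Crop a patch around each detected dot and regress its depth."""
    half = patch_size // 2
    depths = {}
    for y, x in detect_projection_points(gray):
        patch = gray[y - half:y + half, x - half:x + half]
        if patch.shape != (patch_size, patch_size):
            continue  # skip dots too close to the image border
        tensor = torch.from_numpy(patch).float()[None, None]  # shape (1, 1, H, W)
        with torch.no_grad():
            depths[(y, x)] = model(tensor).item()
    return depths


if __name__ == "__main__":
    model = PatchDepthNet().eval()  # weights would come from training on defocused dots

    # Stand-in capture: a dark frame with a few bright projected points.
    image = np.zeros((480, 640), dtype=np.float32)
    rng = np.random.default_rng(0)
    for y, x in zip(rng.integers(32, 448, size=50), rng.integers(32, 608, size=50)):
        image[y, x] = 1.0

    sparse_depth = estimate_sparse_depth(image, model)
    print(f"estimated depth at {len(sparse_depth)} projected points")
```

A full system would replace the threshold detector with a robust dot localizer and interpolate the resulting sparse depths into a dense map; the sketch only covers the per-point inference step summarized in the abstract.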
