Abstract

Estimating depth from a single RGB image is an ill-posed and inherently ambiguous problem. State-of-the-art deep learning methods can now estimate accurate 2D depth maps, but when the maps are projected into 3D, they lack local detail and are often highly distorted. We propose a fast-to-train two-streamed CNN that predicts depth and depth gradients, which are then fused together into an accurate and detailed depth map. To overcome the challenge of learning from limited-size datasets, we define a novel set loss over multiple images. By regularizing the estimates across a common set of images, the network is less prone to over-fitting and achieves better accuracy than competing methods. Our method is applicable to both entire scenes and individual objects, and we demonstrate this by evaluating on the NYU Depth v2 and ScanNet datasets for indoor scenes and on the ShapeNet dataset for single man-made objects. Experiments show that our depth predictions are competitive with the state of the art and lead to faithful 3D projections rich in detail and structure.
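The set loss admits a simple reading: alongside the usual per-image supervision, predictions for images belonging to a common set are regularized toward consistent behavior. The PyTorch-style snippet below is a minimal sketch of that idea, not the paper's implementation; the function name set_loss, the weight lam, and the specific consistency term are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def set_loss(pred_depths, gt_depths, lam=0.5):
        # pred_depths, gt_depths: (N, 1, H, W) depth maps for N images
        # drawn from a common set (hypothetical shapes, for illustration).

        # Standard supervised term: per-pixel L2 on depth.
        data_term = F.mse_loss(pred_depths, gt_depths)

        # Consistency term: penalize each image's error for deviating
        # from the set's mean error, so no single image is over-fit.
        err = pred_depths - gt_depths
        consistency = ((err - err.mean(dim=0, keepdim=True)) ** 2).mean()

        return data_term + lam * consistency

Under this reading, the consistency term plays the regularizing role the abstract describes: it couples the training signal across images in the set, which is what limits over-fitting when the dataset is small.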
