Abstract

Using deep convolutional neural networks (CNNs) to predict depth from a single image has received considerable attention in recent years due to its impressive performance. However, existing methods process each image independently, without leveraging the multiview information available in video sequences in practical scenarios. Properly exploiting multiview information beyond individual frames could offer considerable benefits in depth prediction accuracy and robustness. In addition, a meaningful measure of prediction uncertainty is essential for decision making, but existing methods do not provide one. This paper presents a novel video-based depth prediction system based on a monocular camera, named Bayesian DeNet. Specifically, Bayesian DeNet consists of a 59-layer CNN that concurrently outputs a depth map and an uncertainty map for each video frame. Each pixel in an uncertainty map indicates the error variance of the corresponding depth estimate. Depth estimates and uncertainties of previous frames are propagated to the current frame based on the tracked camera pose, yielding multiple depth/uncertainty hypotheses for the current frame, which are then fused in a Bayesian inference framework for greater accuracy and robustness. Extensive experiments on three public datasets demonstrate that Bayesian DeNet outperforms state-of-the-art methods for monocular depth prediction. A demo video and code are publicly available.
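
The abstract does not spell out the fusion rule itself. Below is a minimal sketch of one common way such a fusion could work, assuming each per-pixel depth hypothesis is an independent Gaussian whose mean is the predicted depth and whose variance is the value in the uncertainty map; under that assumption the Bayesian posterior reduces to inverse-variance weighting. The function name, shapes, and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_depth_hypotheses(depths, variances):
    """Fuse per-pixel depth hypotheses under an independent-Gaussian model.

    depths, variances: arrays of shape (K, H, W) holding K hypotheses
    (e.g., the current CNN prediction plus depth/uncertainty maps
    propagated from previous frames via the tracked camera pose).
    Returns the inverse-variance-weighted posterior mean and its
    variance, both of shape (H, W).
    """
    precisions = 1.0 / variances                 # per-hypothesis precision
    fused_var = 1.0 / precisions.sum(axis=0)     # posterior variance
    fused_depth = fused_var * (precisions * depths).sum(axis=0)
    return fused_depth, fused_var

# Toy usage: the current prediction plus two propagated hypotheses.
rng = np.random.default_rng(0)
depths = rng.uniform(1.0, 10.0, size=(3, 4, 4))
variances = rng.uniform(0.1, 1.0, size=(3, 4, 4))
fused_d, fused_v = fuse_depth_hypotheses(depths, variances)
```

Note that the fused variance is never larger than the smallest input variance, so each extra hypothesis can only tighten the estimate, which matches the accuracy/robustness gains the abstract attributes to multiview fusion.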
