Abstract
Depth completion aims to predict a dense depth image from a raw sparse depth image with missing values; it is an important yet challenging problem in a myriad of vision, robotics, and multimedia applications. While previous studies have made substantial progress on this problem, most of them directly fuse RGB and depth features without considering the multiple latent cues in the data. In this paper, we propose a multi-cue guidance network model for depth completion, which introduces multi-cue features to guide the regression of residual values. The network comprises two major parts: a multi-cue guidance structure and a residual regression structure. The multi-cue guidance structure, composed of two parallel convolutional sampling streams, extracts features from the raw depth image and the corresponding RGB image, respectively. The extracted features serve as multi-cue guidance and are combined with the depth image to feed the residual regression structure, where the estimated residual values are added to the raw depth values to produce the final dense depth values. By virtue of the multi-cue guidance and residual regression, the proposed model can leverage the multiple latent cues in the data to predict more accurate depth values. The proposed method was evaluated on two challenging datasets covering outdoor and indoor scenes: the KITTI benchmark and the NYUv2 dataset. The experimental results show that the proposed method outperforms the compared methods, and extensive ablation studies demonstrate the effectiveness of the individual modules.
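The core pipeline sketched in the abstract (two cue streams guiding a residual regressor whose output is added to the raw depth) can be illustrated with a deliberately simplified toy. Every function and weight below is a hypothetical stand-in, not the paper's network: a box filter replaces each convolutional stream, and a fixed weighted sum replaces the learned residual regressor.

```python
import numpy as np

def box_filter(img):
    """Toy stand-in for a convolutional feature stream: a 3x3 box blur
    with edge padding. A real stream would be a stack of learned convolutions."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    return sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def multi_cue_depth_completion(sparse_depth, rgb_gray):
    """Toy illustration of multi-cue guided residual regression.
    sparse_depth: 2-D array where 0 marks a missing measurement.
    rgb_gray:     2-D grayscale guidance image of the same shape,
                  assumed (for this toy only) to be scaled to depth-like units.
    Returns a dense depth map: raw values are kept where observed, and a
    residual fills the missing pixels."""
    depth_cues = box_filter(sparse_depth)   # depth-stream guidance features
    rgb_cues = box_filter(rgb_gray)         # RGB-stream guidance features
    # Hypothetical "residual regressor": a fixed fusion of the two cue maps
    # (0.9 / 0.1 are illustrative weights, not learned parameters).
    proposal = 0.9 * depth_cues + 0.1 * rgb_cues
    missing = sparse_depth == 0
    # Residual is zero at observed pixels, so raw measurements pass through;
    # final dense depth = raw sparse depth + estimated residual.
    residual = np.where(missing, proposal, 0.0)
    return sparse_depth + residual

# Usage on a 3x3 example with one missing pixel in the center:
sparse = np.array([[2.0, 2.0, 2.0],
                   [2.0, 0.0, 2.0],
                   [2.0, 2.0, 2.0]])
gray = np.full((3, 3), 2.0)
dense = multi_cue_depth_completion(sparse, gray)
```

The toy preserves the two properties the abstract highlights: observed depth values are passed through unchanged, and missing values are regressed from a fusion of depth and RGB guidance cues.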