Abstract

Depth maps obtained from current LiDAR scans are relatively sparse, while many applications require dense depth maps. In this paper we propose a fusion-based depth completion method in which RGB images and semantic images jointly guide the completion. Features are extracted from the RGB image, the semantic image, and the sparse depth map, and the three branches are then fused. Specifically, we first perform depth estimation on the RGB image to generate a color depth. The color depth is then fused with the semantic image to estimate a semantic depth. Finally, the color depth and the semantic depth together guide the completion of the sparse depth map, yielding a more accurate dense depth map. A fusion module (AFF) is added in several branches to fuse the color-depth, semantic-depth, and sparse-depth features. Experiments on the public KITTI dataset show that the depth maps predicted by the proposed depth completion model are more accurate than those of the backbone method guided by RGB images alone.
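The branch-fusion step described above can be sketched as an attention-gated blend of two feature maps. This is a minimal numpy illustration, not the paper's actual AFF module: the scalar gate weight `w` stands in for learned convolution weights, and the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aff_fuse(feat_a, feat_b, w):
    """Hypothetical AFF-style fusion: an attention map derived from the
    summed features gates a per-element blend of the two branches."""
    # Attention logits from the element-wise sum of both branches;
    # `w` is a placeholder for the module's learned weights.
    attn = sigmoid(w * (feat_a + feat_b))
    # Convex combination: attention selects between the two maps per element.
    return attn * feat_a + (1.0 - attn) * feat_b

# Toy "color depth" and "semantic depth" feature maps, shape (C, H, W)
rng = np.random.default_rng(0)
color_feat = rng.standard_normal((8, 4, 4))
semantic_feat = rng.standard_normal((8, 4, 4))
fused = aff_fuse(color_feat, semantic_feat, w=0.5)
print(fused.shape)  # (8, 4, 4)
```

Because the sigmoid gate lies in (0, 1), every fused element is a convex combination of the two inputs, so the fused map stays within the per-element range of the color and semantic features.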
