Abstract
The depth maps obtained from current LiDAR scans are relatively sparse, whereas many applications require dense depth maps. In this paper we propose a fusion-based method in which RGB images and semantic images jointly guide depth completion. Features are first extracted from the RGB image, the semantic image, and the sparse depth map, and the resulting features are fused. Specifically, we first perform depth estimation on the RGB image to generate a color depth. The color depth is then fused with the semantic image to estimate a semantic depth. Finally, the color depth and semantic depth together guide the completion of the sparse depth map, yielding a more accurate dense depth map. A fusion module (AFF) is added in several branches to fuse the color depth, semantic depth, and sparse depth features. Experiments on the public KITTI dataset show that the depth maps predicted by the proposed depth completion model are more accurate than those of the backbone depth completion method guided by RGB only.
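The attention-weighted fusion of two guidance branches can be illustrated with a minimal sketch. Note that the AFF module in the paper is a learned attention sub-network inside the completion network; the snippet below is only an illustrative stand-in that gates two feature maps with a fixed channel-wise sigmoid weight, and all names (`aff_fuse`, `rgb_feat`, `sem_feat`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aff_fuse(x, y):
    """Simplified attentional fusion of two feature maps of shape (C, H, W).

    A channel-wise weight is computed from the sum of the two inputs
    (a stand-in for a learned attention branch), and the inputs are
    blended with complementary weights, so the output is a per-element
    convex combination of x and y.
    """
    s = x + y
    # global average pooling over the spatial dims -> shape (C, 1, 1)
    w = sigmoid(s.mean(axis=(1, 2), keepdims=True))
    return w * x + (1.0 - w) * y

# toy example: fuse "color depth" and "semantic depth" feature maps
rgb_feat = np.random.rand(8, 4, 4)
sem_feat = np.random.rand(8, 4, 4)
fused = aff_fuse(rgb_feat, sem_feat)
print(fused.shape)  # (8, 4, 4)
```

Because the two weights sum to one, every fused value stays between the corresponding values of the two input branches, which is the basic behavior such a fusion module is meant to provide.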