Abstract

Depth completion aims to predict a dense depth map from a sparse one. Benefiting from the powerful representational ability of convolutional neural networks, recent depth completion methods have achieved remarkable performance. However, accurately preserving depth structures, such as tiny structures and object boundaries, remains challenging. To tackle this problem, we propose a structure-preserving network (SPNet) in this paper. First, an efficient multi-scale gradient extractor (MSGE) is proposed to extract useful multi-scale gradient images, which contain rich structural information helpful for recovering accurate depth. The MSGE is constructed based on the proposed semi-fixed depthwise separable convolution. Meanwhile, we adopt a stable gradient MAE loss (LGMAE) to provide an additional depth-gradient constraint for better structure reconstruction. Moreover, a multi-level feature fusion module (MFFM) is proposed to adaptively fuse the spatial details from the low-level encoder with the semantic information from the high-level decoder, incorporating more structural details into the depth modality. As demonstrated by experiments on the NYUv2 and KITTI datasets, our method outperforms several state-of-the-art methods in both quantitative and qualitative evaluations.
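To make the gradient MAE loss concrete, the following is a minimal NumPy sketch of one plausible formulation: compute finite-difference gradient maps of the predicted and ground-truth depth, then take the mean absolute error between them. The forward-difference operator and the function names here are illustrative assumptions, not the paper's exact definition of LGMAE.

```python
import numpy as np

def image_gradients(depth):
    """Horizontal and vertical gradients of a 2-D depth map via forward
    differences (one possible discretization; the paper's exact gradient
    operator is not specified in the abstract)."""
    gx = np.diff(depth, axis=1)  # differences along columns
    gy = np.diff(depth, axis=0)  # differences along rows
    return gx, gy

def gradient_mae_loss(pred, gt):
    """Mean absolute error between the gradient maps of the predicted
    and ground-truth depth (a sketch of a gradient-MAE-style loss)."""
    pgx, pgy = image_gradients(pred)
    ggx, ggy = image_gradients(gt)
    return np.abs(pgx - ggx).mean() + np.abs(pgy - ggy).mean()

# Example: a prediction that differs from the ground truth only by a
# constant offset has identical gradients, so the gradient loss is zero.
gt = np.arange(9, dtype=float).reshape(3, 3)
pred = gt + 5.0
print(gradient_mae_loss(pred, gt))  # → 0.0
```

Because the loss depends only on local depth differences, it penalizes blurred or displaced edges while ignoring global offsets, which is why such a term complements a plain per-pixel depth loss for structure preservation.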

