Abstract

3D reconstruction and understanding from a monocular camera is a key problem in computer vision. Recent learning-based approaches, especially multi-task learning, have significantly improved the performance of the related tasks. However, existing works are still limited in capturing loss-spatial-aware information. In this paper, we propose a novel Joint-confidence-guided network (JCNet) that simultaneously predicts depth, semantic labels, surface normals, and a joint confidence map for the corresponding loss functions. Specifically, we design a Joint Confidence Fusion and Refinement (JCFR) module to fuse multi-task features in a unified independent space, which also absorbs the geometric-semantic structure features of the joint confidence map. Confidence-guided uncertainty generated from the joint confidence map supervises the multi-task predictions across the spatial and channel dimensions. To alleviate the imbalance of training attention among different loss functions and spatial regions, a Stochastic Trust Mechanism (STM) stochastically modifies the elements of the joint confidence map during training. Finally, we design a calibrating operation that alternately optimizes the joint confidence branch and the remaining parts of JCNet to avoid overfitting. The proposed method achieves state-of-the-art performance in both geometric-semantic prediction and uncertainty estimation on NYU-Depth V2 and Cityscapes.
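The abstract does not give the exact formulation of the confidence-guided loss or the STM. As a rough illustration only, the following PyTorch sketch shows one plausible way a per-pixel joint confidence map could weight a task loss, with a stochastic "trust" perturbation during training; the function name, the reset probability `p`, and the log-penalty term are assumptions for illustration, not the paper's definitions.

```python
import torch

def confidence_weighted_loss(pred, target, confidence, stochastic_trust=True, p=0.1):
    """Illustrative sketch (not the paper's method): weight a per-pixel L1 loss
    by a predicted joint confidence map.

    pred, target: (B, C, H, W) task prediction and ground truth.
    confidence:   (B, 1, H, W) joint confidence values in (0, 1).
    """
    # Per-pixel L1 error averaged over channels.
    per_pixel = torch.abs(pred - target).mean(dim=1, keepdim=True)

    if stochastic_trust:
        # Hypothetical stochastic trust step: randomly reset a fraction p of
        # confidence entries to full trust (1.0) so low-confidence regions
        # still receive some training signal.
        mask = (torch.rand_like(confidence) < p).float()
        confidence = confidence * (1.0 - mask) + mask

    # Down-weight the error where confidence is low, and penalize uniformly
    # low confidence with a log term (a common uncertainty-weighting trick).
    return (confidence * per_pixel - torch.log(confidence + 1e-6)).mean()

# Example usage with random tensors:
conf = torch.sigmoid(torch.randn(2, 1, 64, 64))
loss = confidence_weighted_loss(torch.randn(2, 3, 64, 64),
                                torch.randn(2, 3, 64, 64), conf)
```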
