Abstract
Depth prediction from a single image is a challenging task due to intra-scale ambiguity and the unavailability of prior information. Predicting an unambiguous depth map from a single RGB image is an important aspect of many computer vision applications. In this paper, an end-to-end sparse-to-dense network (S2DNet) is proposed for single image depth estimation (SIDE). The proposed network processes a single image together with additional sparse depth samples, which are acquired either from a low-resolution depth sensor or computed by visual simultaneous localization and mapping (SLAM) algorithms. In the first stage, the proposed S2DNet estimates a coarse-level depth map using a sparse-to-dense coarse network (S2DCNet). In the second stage, the estimated coarse-level depth map is concatenated with the input image and fed to a sparse-to-dense fine network (S2DFNet) for fine-level depth map estimation. The proposed S2DFNet incorporates an attention-map architecture that helps to recover prominent depth information. Quantitative and qualitative evaluation of the proposed network is carried out using standard error metrics. We evaluate S2DNet on four publicly available benchmark datasets: the NYU Depth-V2 indoor dataset [1], the KITTI odometry outdoor dataset [2], the KITTI depth completion benchmark [3], and the SUN RGB-D database [4]. Further, we extend the proposed S2DNet to image de-hazing. The experimental analysis shows that the proposed S2DNet outperforms existing state-of-the-art methods for both single image depth estimation and image de-hazing.
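For illustration only, the two-stage data flow described above can be sketched as follows. This is a minimal PyTorch sketch under stated assumptions: the layer widths, kernel sizes, the internals of the attention block, the 228x304 input size, and the 200-sample sparse input are all placeholders, not the architecture proposed in the paper.

```python
# Minimal sketch of the two-stage sparse-to-dense pipeline described above.
# All module internals are illustrative placeholders, not the paper's design.
import torch
import torch.nn as nn

class S2DCNet(nn.Module):
    """Coarse stage: RGB image + sparse depth samples -> coarse depth map."""
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 1 sparse-depth channel (zero where unsampled).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, rgb, sparse_depth):
        return self.head(self.encoder(torch.cat([rgb, sparse_depth], dim=1)))

class S2DFNet(nn.Module):
    """Fine stage: RGB image + coarse depth -> refined depth map, weighted by
    a (placeholder) spatial attention map over prominent regions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = nn.Sequential(
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),  # per-pixel attention weights
        )
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, rgb, coarse_depth):
        f = self.features(torch.cat([rgb, coarse_depth], dim=1))
        return self.head(f * self.attention(f))

class S2DNet(nn.Module):
    """End-to-end network: coarse stage followed by fine stage."""
    def __init__(self):
        super().__init__()
        self.coarse, self.fine = S2DCNet(), S2DFNet()

    def forward(self, rgb, sparse_depth):
        coarse = self.coarse(rgb, sparse_depth)
        return self.fine(rgb, coarse)

# Usage example: one image with 200 randomly placed sparse depth samples.
rgb = torch.rand(1, 3, 228, 304)
sparse = torch.zeros(1, 1, 228, 304)
idx = torch.randint(0, 228 * 304, (200,))
sparse.view(-1)[idx] = torch.rand(200) * 10.0  # depths in metres (assumed range)
print(S2DNet()(rgb, sparse).shape)  # torch.Size([1, 1, 228, 304])
```

Note the design choice this sketch reflects: the sparse depth samples enter only the coarse stage, while the fine stage consumes the coarse prediction concatenated with the RGB image, so both stages can be trained end-to-end.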