Abstract

Depth estimation has received considerable attention and is often applied to visual simultaneous localization and mapping (SLAM) for scene reconstruction. To our knowledge, monocular depth estimation-based SLAM has rarely been supplied with sufficiently reliable depth, because previous depth estimation methods seldom re-exploit new image features effectively, easily lose local features, and readily ignore the relative depth relationships among pixels. Built on inaccurate monocular depth estimation, SLAM still faces scale ambiguity. To achieve accurate scene reconstruction based on monocular depth estimation, this paper makes three contributions. (1) We design a depth estimation model (DEM), consisting of an encoder that re-exploits new features and a decoder that learns local features effectively. (2) We propose a loss function that uses the depth relationships among pixels to guide the training of the DEM. (3) We design a modular SLAM system containing the DEM, feature detection, descriptor computation, feature matching, pose prediction, keyframe extraction, loop closure detection, and pose-graph optimization for pixel-level scene reconstruction. Extensive experiments demonstrate that the DEM and the DEM-based SLAM system are effective. (1) Our DEM predicts more reliable depth than state-of-the-art methods on public datasets when the input is RGB images, sparse depth, or the fusion of both. (2) The DEM-based SLAM system achieves accuracy comparable to that of well-known modular SLAM systems.

Highlights

  • Considered an important computer vision topic, depth estimation focuses on predicting depth from RGB images, sparse depth, or the fusion of both (RGBd) [1]

  • We propose a loss function that uses the depth relationships among pixels to guide the training of the depth estimation model (DEM), and we design a modular simultaneous localization and mapping (SLAM) system containing the DEM, feature detection, descriptor computation, feature matching, pose prediction, keyframe extraction, loop closure detection, and pose-graph optimization for pixel-level scene reconstruction

  • Evaluating encoders, decoders, and loss functions, we demonstrate that our encoder, decoder, and loss function are effective


Summary

Introduction

Considered an important computer vision topic, depth estimation focuses on predicting depth from RGB images (monocular images), sparse depth, or the fusion of both (RGBd) [1]. Most previous work uses convolutional neural networks (CNNs) to predict depth from sparse depth [3], monocular RGB images [1], [4], [5], [6], or the fusion of both [7], [8]. These prior methods tend to leverage CNN models such as ResNets [9] to learn visual features [5]–[8].
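To make the sparse-depth input concrete, the following is a purely illustrative, non-learned baseline (not the paper's CNN-based DEM): it completes a sparse depth map, such as projected LiDAR points, by assigning each empty pixel the depth of its nearest valid measurement. All names here are our own; the sketch only shows the kind of input/output a depth completion model works with.

```python
import numpy as np

def fill_sparse_depth(sparse: np.ndarray) -> np.ndarray:
    """Complete a sparse depth map (zeros = missing) by copying the depth
    of the nearest valid pixel. A trivial baseline, not a CNN model."""
    h, w = sparse.shape
    valid = np.argwhere(sparse > 0)          # (row, col) of known depths
    if valid.size == 0:                      # nothing to propagate
        return sparse.copy()
    dense = sparse.copy()
    for r in range(h):
        for c in range(w):
            if dense[r, c] == 0:
                # nearest valid measurement by squared Euclidean distance
                d2 = ((valid - (r, c)) ** 2).sum(axis=1)
                nr, nc = valid[d2.argmin()]
                dense[r, c] = sparse[nr, nc]
    return dense

# Example: a 4x4 map with two LiDAR-style measurements
sparse = np.zeros((4, 4))
sparse[0, 0] = 1.0   # 1 m measured at the top-left pixel
sparse[3, 3] = 5.0   # 5 m measured at the bottom-right pixel
dense = fill_sparse_depth(sparse)
```

A learned model such as the DEM replaces this nearest-neighbor rule with an encoder-decoder that also conditions on the RGB image, which is why fused RGBd input can outperform either modality alone.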
