Abstract

Depth and visual odometry estimation are two essential components of SLAM systems. Compared with traditional algorithms, supervised learning methods have shown promising results in single-view depth estimation and visual odometry estimation. However, they require large amounts of labeled data. Recently, unsupervised approaches that estimate depth and odometry by minimizing photometric error have drawn great attention. In this paper, we present a novel approach to learning depth and odometry via unsupervised learning. Our method ameliorates the original photometric loss to enhance robustness to illumination change in real scenarios. In addition, we propose a new structure for the Pose-net and Explainability-net to achieve rotation-sensitive odometry results and more accurate explainability masks. The experimental results demonstrate that our approach achieves better performance than existing unsupervised methods on both depth and odometry benchmarks.
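To make the core idea concrete, the sketch below shows one common way such methods compare a target frame with a view-synthesized (warped) frame: a photometric loss that mixes an L1 term with a normalized, structure-based term so that global brightness shifts are penalized less. This is a minimal illustration of the general technique, not the paper's exact formulation; the function name, the per-image normalization, and the weight `alpha` are assumptions.

```python
import numpy as np

def photometric_loss(target, warped, alpha=0.85):
    """Illustrative illumination-robust photometric loss (a sketch, not
    the paper's exact loss). Mixes a plain L1 term with an L1 term over
    per-image-normalized intensities; the normalized term is invariant
    to global brightness and contrast changes."""
    # Plain L1 photometric error between target and warped frames.
    l1 = np.abs(target - warped).mean()
    # Normalize each image to zero mean, unit variance so that a global
    # illumination shift does not inflate the error.
    t = (target - target.mean()) / (target.std() + 1e-8)
    w = (warped - warped.mean()) / (warped.std() + 1e-8)
    structural = np.abs(t - w).mean()
    # alpha weights the illumination-robust term against raw L1.
    return alpha * structural + (1.0 - alpha) * l1
```

For a uniformly brightened copy of the target, the normalized term vanishes, so this loss is smaller than the plain L1 error, which is the intended robustness property.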
