Abstract

A thermal camera can robustly capture thermal radiation images under harsh lighting conditions such as night scenes, tunnels, and disaster scenarios. Despite this advantage, however, neither depth nor ego-motion estimation for thermal cameras has been actively explored so far. In this paper, we propose a self-supervised learning method for depth and ego-motion estimation from thermal images. The proposed method exploits multi-spectral consistency, which consists of temperature and photometric consistency losses. The temperature consistency loss provides a fundamental self-supervisory signal by reconstructing clipped and colorized thermal images. Additionally, we design a differentiable forward warping module that transforms the coordinate system of the estimated depth map and relative pose from the thermal camera to the visible camera. Based on this module, the photometric consistency loss provides complementary self-supervision to the networks. Networks trained with the proposed method robustly estimate depth and pose from monocular thermal video under low-light and even zero-light conditions. To the best of our knowledge, this is the first work to simultaneously estimate both depth and ego-motion from monocular thermal video in a self-supervised manner.
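To make the temperature consistency term concrete, the sketch below shows one plausible PyTorch implementation, assuming percentile-based clipping of the raw thermal values and a standard SSIM+L1 reconstruction loss; the function names, clipping percentiles, and loss weight `alpha` are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rearrange_thermal(raw, lo_q=0.01, hi_q=0.99):
    # Clip raw radiometric values to a per-image percentile range and
    # rescale to [0, 1]. Percentile clipping is an assumption here; the
    # paper's exact clipping/colorization strategy may differ.
    flat = raw.flatten(1)                                    # (B, C*H*W)
    lo = torch.quantile(flat, lo_q, dim=1).view(-1, 1, 1, 1)
    hi = torch.quantile(flat, hi_q, dim=1).view(-1, 1, 1, 1)
    clipped = torch.minimum(torch.maximum(raw, lo), hi)
    return (clipped - lo) / (hi - lo + 1e-7)

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Standard single-scale SSIM dissimilarity with 3x3 average pooling,
    # as commonly used in self-supervised depth pipelines.
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sig_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sig_x + sig_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)

def temperature_consistency_loss(tgt_raw, src_raw_warped, alpha=0.85):
    # tgt_raw: raw thermal image in the target view, shape (B, 1, H, W).
    # src_raw_warped: source-view thermal image inverse-warped into the
    # target view using the predicted depth and relative pose (the usual
    # self-supervised reconstruction step, not shown here).
    tgt = rearrange_thermal(tgt_raw)
    src = rearrange_thermal(src_raw_warped)
    return alpha * ssim(tgt, src).mean() + (1 - alpha) * (tgt - src).abs().mean()
```

In a full pipeline, `src_raw_warped` would come from the standard depth-and-pose inverse warp, and the photometric consistency loss on visible images would be computed analogously after the forward warping module transfers the thermal-frame depth and pose into the visible camera's coordinate system.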
