Abstract

In this letter, we propose an adaptive cost volume fusion algorithm for multi-modal depth estimation in changing environments. Our method takes measurements from multi-modal sensors to exploit their complementary characteristics and generates depth cues from each modality in the form of adaptive cost volumes using deep neural networks. The proposed adaptive cost volume accounts for sensor configurations and computational costs to resolve the imbalanced and redundant depth-basis problem of conventional cost volumes. We further extend its role to a generalized depth representation and propose a geometry-aware cost fusion algorithm. Our unified and geometrically consistent depth representation leads to accurate and efficient multi-modal sensor fusion, which is crucial for robustness to changing environments. To validate the proposed framework, we introduce a new Multi-Modal Depth in Changing Environments (MMDCE) dataset. The dataset was collected with our own vehicular system equipped with RGB, NIR, and LiDAR sensors in changing environments. Experimental results demonstrate that our method is robust, accurate, and reliable in changing environments. Our code and dataset are available on our project page.
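The abstract does not detail how the cost volumes are built or fused. As a rough illustration only, the sketch below shows one generic way to resample modality-specific cost volumes onto a shared set of depth hypotheses, fuse them with confidence weights, and regress depth with a soft arg-min. The inverse-depth sampling scheme, tensor shapes, fusion rule, and function names are assumptions for illustration, not the paper's adaptive cost volume or geometry-aware fusion algorithm.

```python
# Illustrative sketch only: generic cost-volume alignment and fusion over a
# shared depth-hypothesis axis. All design choices here are assumptions and
# do not reproduce the authors' adaptive cost volume or geometry-aware fusion.
import torch


def sample_depth_hypotheses(d_min: float, d_max: float, n: int) -> torch.Tensor:
    """Inverse-depth sampling of n hypotheses in [d_min, d_max] (assumed scheme)."""
    inv = torch.linspace(1.0 / d_min, 1.0 / d_max, n)  # descending inverse depth
    return 1.0 / inv  # ascending depths, shape (n,)


def resample_cost_volume(cost: torch.Tensor,
                         src_depths: torch.Tensor,
                         dst_depths: torch.Tensor) -> torch.Tensor:
    """Linearly interpolate a (B, D_src, H, W) cost volume onto dst_depths.

    Stands in for aligning modality-specific volumes to one shared,
    geometrically consistent depth representation.
    """
    idx = torch.bucketize(dst_depths, src_depths).clamp(1, len(src_depths) - 1)
    lo, hi = idx - 1, idx
    w_hi = (dst_depths - src_depths[lo]) / (src_depths[hi] - src_depths[lo] + 1e-8)
    w_hi = w_hi.clamp(0, 1).view(1, -1, 1, 1)
    return (1 - w_hi) * cost[:, lo] + w_hi * cost[:, hi]  # (B, D_dst, H, W)


def fuse_cost_volumes(volumes, confidences):
    """Confidence-weighted fusion of aligned cost volumes (assumed fusion rule).

    volumes: list of (B, D, H, W); confidences: list of (B, 1, H, W).
    """
    weights = torch.softmax(torch.stack(confidences), dim=0)  # (M, B, 1, H, W)
    return (weights * torch.stack(volumes)).sum(dim=0)        # (B, D, H, W)


def soft_argmin_depth(cost: torch.Tensor, depths: torch.Tensor) -> torch.Tensor:
    """Regress per-pixel depth from a fused cost volume via soft arg-min."""
    prob = torch.softmax(-cost, dim=1)                    # (B, D, H, W)
    return (prob * depths.view(1, -1, 1, 1)).sum(dim=1)   # (B, H, W)
```

For example, an RGB stereo cost volume and a LiDAR-derived cost volume with different depth samplings could each be passed through `resample_cost_volume` onto one shared hypothesis set, then combined with `fuse_cost_volumes` before `soft_argmin_depth` produces the final depth map.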
