Abstract

Fusing multiple sensors in filter-based and graph-based simultaneous localization and mapping (SLAM) relies on the uncertainty associated with each measurement. Proper covariance estimation is therefore critical to balancing confidence levels among sensors. Despite its importance, traditional covariance approximation has mostly relied on first-order derivatives or fixed measurement covariances, making it error-prone and often heuristic. Recently, deep learning has yielded meaningful performance for uncertainty estimation, but it has typically been applied to a single sensor in a supervised manner. In contrast to this supervised approach, we introduce an unsupervised loss for uncertainty modeling that learns uncertainty without requiring ground-truth covariance as a label. Most importantly, we overcome the limitation of learning a single sensor's uncertainty by introducing a way to balance uncertainty across different sensor modalities. In doing so, we alleviate the inter-sensor uncertainty balancing issue often encountered in multi-sensor SLAM. Targeting covariance learning for visual odometry, particularly its integration with inertial sensors, we validate the proposed uncertainty learning method in a visual-inertial odometry application on public datasets under artificial visual and inertial degradations that mimic harsh environments.
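
For concreteness, one common way to realize an unsupervised covariance loss of the kind described above is a Gaussian negative log-likelihood over the measurement residual: only the residual (e.g., a relative-pose error from visual odometry) is supervised, and the predicted variance is learned implicitly through the trade-off between down-weighting large errors and the log-variance penalty. The sketch below is a minimal PyTorch illustration assuming a diagonal covariance; it is an assumed formulation for illustration, not the paper's exact loss.

    import torch

    def gaussian_nll_loss(residual, log_var):
        # Negative log-likelihood of a diagonal Gaussian.
        # residual: (B, D) error between predicted and reference measurement.
        # log_var:  (B, D) predicted log-variances (diagonal covariance).
        # No ground-truth covariance is needed: the variance is learned
        # because inflating it reduces the weighted squared error but is
        # penalized by the additive log_var term.
        return 0.5 * (residual.pow(2) * torch.exp(-log_var) + log_var).mean()

    # Toy usage: a network head would normally predict both the measurement
    # and its log-variance; random tensors stand in for those outputs here.
    pred = torch.randn(8, 6)
    log_var = torch.zeros(8, 6, requires_grad=True)
    target = torch.randn(8, 6)
    loss = gaussian_nll_loss(pred - target, log_var)
    loss.backward()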
