Abstract

Accurate localization is critical for rovers exploring the lunar surface. Traditionally, lunar rover localization relies on sensor data from odometers, inertial measurement units, and stereo cameras. However, localization errors accumulate over long traverses, limiting the rover's localization accuracy. This paper presents a metric localization framework based on cross-view images (a ground view from a rover and an aerial view from an orbiter) to eliminate accumulated localization errors. First, we employ perspective projection to reduce the geometric differences between cross-view images. Then, we propose an image-based metric localization network that extracts image features and generates a location heatmap. This heatmap serves as the basis for accurate estimation of query locations. We also create the first large-area lunar cross-view image (Lunar-CV) dataset to evaluate localization performance. The dataset consists of 30 digital orthophoto maps (DOMs) with a resolution of 7 m/pix, collected by the Chang'e-2 lunar orbiter, along with 8100 simulated rover panoramas. Experimental results on the Lunar-CV dataset demonstrate the superior performance of the proposed framework. Compared to the second-best method, our method reduces the average localization error by 26% and the median localization error by 22%.
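The abstract does not specify how the query location is read off the heatmap, but a common choice in heatmap-based metric localization is a soft-argmax: normalize the heatmap into a probability distribution over map cells and take its expected coordinate, which yields sub-pixel estimates. The sketch below illustrates this general technique; the function name, the 7 m/pix scale (taken from the dataset description above), and the soft-argmax choice itself are illustrative assumptions, not the paper's confirmed method.

```python
import numpy as np

def soft_argmax_location(heatmap, m_per_px=7.0):
    """Estimate a map location from a localization heatmap.

    Illustrative sketch (not the paper's confirmed decoder):
    softmax-normalize the heatmap scores into a probability
    distribution over map cells, then take the expected (row, col)
    coordinate. Returns the estimate in pixels and in metres.
    """
    # Numerically stable softmax over all cells.
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()

    # Expected coordinate under the distribution (sub-pixel).
    rows, cols = np.indices(heatmap.shape)
    r = float((probs * rows).sum())
    c = float((probs * cols).sum())
    return (r, c), (r * m_per_px, c * m_per_px)

# Example: a heatmap sharply peaked at cell (10, 20) should yield
# an estimate close to that cell.
hm = np.zeros((32, 32))
hm[10, 20] = 10.0
(px, py), _ = soft_argmax_location(hm)
```

A hard argmax would also work but is limited to integer pixel accuracy; at 7 m/pix that quantization alone costs several metres, which is why sub-pixel decoding matters for metric localization.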
