Abstract

<p indent="0mm">In the Chang’E-4 lunar exploration mission, the tasks of the Yutu 2 rover are planned and controlled by teleoperation. To cover a large distance, the rover usually travels 6 to 9 m per planned traverse; as a result, the images taken by the Yutu 2 rover differ substantially in scale, rotation, and translation. The overlap areas between these images are consequently small, and their resolution and morphology can differ greatly, which makes image matching difficult and hinders automatic visual positioning between adjacent stations. In this paper, a general pixel-spatial resolution model for lunar images is proposed by introducing the inclination angle into existing models. We further discuss in detail how image scale and the feature matching rate vary with station distance, imaging distance, and lunar surface inclination angle, individually and in combination, and develop a mechanism for choosing the equivalent minimal factor subset. Finally, the proposed model is evaluated using the station data collected by the Yutu 2 rover on the first and second lunar days. Our analysis classifies the influence of the pixel-spatial resolution ratio on the image matching rate into four types. On this basis, a general rule is given for describing the image matching rate at different stations, and constraints for determining the distance and location of adjacent stations in actual tasks are presented. The proposed approach has the potential to improve the automation of localization in subsequent lunar explorations.
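To make the pixel-spatial resolution idea concrete, the following is a minimal sketch, not the paper's actual model. It assumes a standard pinhole-camera ground-sample-distance approximation (resolution = pixel size × imaging distance / focal length) and one plausible way an inclination angle could enter, by stretching the ground footprint along the slope by 1/cos(angle); the function name and the exact form of the inclination term are illustrative assumptions.

```python
import math

def ground_resolution(pixel_size_m, focal_length_m, imaging_distance_m,
                      inclination_deg=0.0):
    """Approximate spatial resolution (m/pixel) of a surface patch.

    Pinhole-camera approximation; the inclination term is an
    illustrative assumption, not the paper's exact formulation.
    """
    # Nadir-view ground sample distance from similar triangles.
    nadir = pixel_size_m * imaging_distance_m / focal_length_m
    # An inclined surface spreads one pixel over a longer ground span.
    return nadir / math.cos(math.radians(inclination_deg))

# Hypothetical numbers: 10 µm pixels, 50 mm focal length, 10 m range.
flat = ground_resolution(1e-5, 0.05, 10.0)            # 2 mm/pixel on flat ground
sloped = ground_resolution(1e-5, 0.05, 10.0, 60.0)    # footprint doubled at 60°
```

Under this sketch, the ratio of two such resolutions between adjacent stations is what would drive the scale difference between their images, and hence the matching rate the paper analyzes.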
