Abstract

Terrain-relative navigation (TRN) is widely used in entry, descent, and landing (EDL) systems for spacecraft position estimation and navigation. Unlike crater-detection methods, image patch matching does not depend on the integrity of a crater database or on the saliency of crater features. However, lunar images pose four difficulties: illumination changes, perspective changes, resolution mismatch, and lack of texture. Deep learning offers possible solutions. In this paper, an L2-normed attention and multi-scale fusion network (L2AMF-Net) was proposed for patch descriptor learning to effectively overcome these four difficulties and achieve accurate, robust lunar image patch matching. On the one hand, an L2-Attention unit (LAU) was proposed to generate attention score maps in the spatial and channel dimensions and enhance feature extraction. On the other hand, a multi-scale feature self- and fusion-enhance structure (SFES) was proposed to fuse multi-scale features and enhance the feature representations. L2AMF-Net achieved 95.57% matching accuracy and outperformed several other methods on the lunar image patch dataset generated in this paper. Experiments verified the illumination, perspective, and texture robustness of L2AMF-Net, as well as the validity of the attention module and the feature fusion structure.
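The abstract does not give the exact formulation of the L2-Attention unit, but the general idea of deriving spatial and channel attention scores from L2 norms of a feature map can be sketched as follows. This is an illustrative assumption, not the paper's actual architecture: the function name `l2_attention`, the softmax normalization, and the multiplicative reweighting are all hypothetical choices.

```python
import numpy as np

def l2_attention(feat, eps=1e-8):
    """Hypothetical sketch of an L2-norm-driven attention unit.

    feat: (C, H, W) feature map. Returns the features reweighted by
    attention scores computed in the spatial and channel dimensions.
    """
    # Spatial attention: L2 norm across channels at each pixel,
    # then softmax-normalized into a spatial score map.
    spatial = np.linalg.norm(feat, axis=0)                # (H, W)
    spatial = np.exp(spatial - spatial.max())
    spatial /= spatial.sum() + eps

    # Channel attention: L2 norm over each channel's spatial extent,
    # then softmax-normalized into per-channel scores.
    channel = np.linalg.norm(feat.reshape(feat.shape[0], -1), axis=1)  # (C,)
    channel = np.exp(channel - channel.max())
    channel /= channel.sum() + eps

    # Reweight the feature map along both dimensions.
    return feat * channel[:, None, None] * spatial[None, :, :]

feat = np.random.rand(8, 4, 4).astype(np.float32)
out = l2_attention(feat)
```

In a real descriptor network the scores would be learned (e.g. passed through trainable layers) rather than taken directly from the norms, but the sketch shows how L2 magnitudes can drive attention in both dimensions at once.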
