State-of-the-art techniques for vision-based relative navigation rely on images acquired in the visible spectral band; consequently, the accuracy and robustness of the navigation are strongly influenced by the illumination conditions. This paper studies the exploitation of thermal-infrared images for navigation as a possible solution to improve navigation in close proximity with a target spacecraft. Thermal-infrared images depend on the thermal radiation emitted by the target and are therefore independent of lighting conditions; however, they suffer from poorer texture and lower contrast than visible images. This paper proposes pixel-level image fusion to overcome the limitations of the two image types: the two source images are merged into a single, more informative image that retains the strengths of the two distinct sensing modalities. The contribution of this work is twofold: first, a realistic thermal-infrared image rendering tool for artificial targets is implemented; second, different pixel-level visible/thermal-infrared image fusion techniques are assessed through qualitative and quantitative performance metrics to ease and improve the subsequent image processing step. The work presents a comprehensive evaluation of the best fusion techniques for on-board implementation, paving the way to the development of a multispectral end-to-end navigation chain.
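As a concrete illustration of what pixel-level fusion means here, below is a minimal Python/NumPy sketch of a global weighted-average fusion rule, together with a Shannon-entropy score of the kind often used as a no-reference quantitative fusion metric. This is only an assumed, simplified example for intuition; the paper itself assesses several fusion techniques and metrics that are not detailed in the abstract. The function names, the fixed weight, and the synthetic input frames are all hypothetical; the inputs are assumed to be co-registered, single-channel visible and thermal frames of equal size.

```python
import numpy as np


def fuse_weighted_average(visible: np.ndarray, thermal: np.ndarray,
                          w_visible: float = 0.5) -> np.ndarray:
    """Pixel-level fusion of co-registered grayscale images by weighted averaging.

    Hypothetical illustrative rule, not the paper's method. Inputs are assumed
    to be spatially registered single-channel arrays of identical shape.
    """
    def normalize(img: np.ndarray) -> np.ndarray:
        # Rescale each band to [0, 1] so differing dynamic ranges
        # (e.g., 8-bit visible vs. 12-bit thermal counts) mix comparably.
        img = img.astype(np.float64)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

    vis_n = normalize(visible)
    thr_n = normalize(thermal)

    # Simple global weighting; practical fusion schemes (e.g., multiscale or
    # saliency-driven ones) compute spatially varying weights instead.
    fused = w_visible * vis_n + (1.0 - w_visible) * thr_n
    return (fused * 255.0).astype(np.uint8)


def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the image histogram, a common no-reference
    fusion-quality indicator (higher generally means more information)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


if __name__ == "__main__":
    # Synthetic stand-ins for registered visible/thermal frames.
    rng = np.random.default_rng(0)
    visible = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    thermal = rng.integers(0, 4096, size=(480, 640), dtype=np.uint16)
    fused = fuse_weighted_average(visible, thermal, w_visible=0.6)
    print(fused.shape, fused.dtype, entropy(fused))
```

The weighted average is the simplest possible fusion rule; its main appeal for on-board use would be its negligible computational cost, whereas multiscale methods trade extra computation for better preservation of edges and texture from both bands.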