Abstract

State-of-the-art techniques for vision-based relative navigation rely on images acquired in the visible spectral band. Consequently, the accuracy and robustness of the navigation are strongly influenced by illumination conditions. This paper studies the exploitation of thermal-infrared images for navigation as a possible solution to improve navigation in close proximity to a target spacecraft. Thermal-infrared images depend on the thermal radiation emitted by the target and are therefore independent of lighting conditions; however, they suffer from poorer texture and lower contrast than visible images. This paper proposes pixel-level image fusion to overcome the limitations of both image types: the two source images are merged into a single, more informative image that retains the strengths of the two sensing modalities. The contribution of this work is twofold: first, a realistic thermal-infrared image rendering tool for artificial targets is implemented; second, different pixel-level visible-thermal-infrared image fusion techniques are assessed through qualitative and quantitative performance metrics to ease and improve the subsequent image-processing step. The work presents a comprehensive evaluation of the best fusion techniques for on-board implementation, paving the way for the development of a multispectral end-to-end navigation chain.
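The abstract does not specify which fusion techniques were assessed, but the simplest pixel-level scheme is a per-pixel weighted average of the co-registered visible and thermal-infrared frames, and Shannon entropy is one common no-reference quantitative metric for the informativeness of the fused result. The sketch below illustrates both ideas with NumPy on synthetic patches; the function names (`fuse_weighted`, `entropy`) and the toy images are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_weighted(vis: np.ndarray, tir: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse co-registered visible and thermal-infrared 8-bit images
    by per-pixel weighted averaging (illustrative baseline, not the
    paper's method)."""
    assert vis.shape == tir.shape, "images must be co-registered"
    fused = w * vis.astype(np.float64) + (1.0 - w) * tir.astype(np.float64)
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit image: a standard
    no-reference metric for fusion quality."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy inputs: a textured "visible" patch and a smooth "thermal" patch.
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
tir = np.full((64, 64), 128, dtype=np.uint8)

fused = fuse_weighted(vis, tir, w=0.5)
```

A fixed weight is the crudest choice; practical pixel-level methods make the weight spatially varying (e.g. driven by local contrast or a multiscale decomposition) so that each region of the fused image is dominated by the more informative source.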


