Abstract
Aircraft landing, especially under low-visibility conditions, often requires specialized assistance equipment. However, because traditional ground-based landing aids carry high construction and maintenance costs, there is growing interest in using onboard sensors for navigation instead. This paper addresses aircraft landing under low-visibility conditions with onboard visible and infrared sensors: visible images are enhanced with infrared images to improve visibility, and the enhanced images are then used for runway detection to guide visual navigation. Two main challenges arise: multimodal sensor image registration and detailed runway detection. Because the sensors are installed at different positions, the images they capture must be aligned at the pixel level, yet multimodal sensor images often contain many distinct features, which makes finding accurate correspondences difficult. In addition, simply identifying the runway area is not enough; the runway lines themselves must be detected in detail. To address these issues, this paper proposes a framework that begins with a registration network embedding a plug-and-play modality transfer module to improve multimodal registration performance. A fusion method then enhances the aligned images, followed by a two-stage detection strategy that detects runway lines at various landing distances. Experimental results show that enhancing images from different sensors, combined with detailed detection, effectively improves detection accuracy.
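The abstract describes a three-stage pipeline: modality transfer plus registration, fusion, and two-stage runway-line detection. The following is a minimal conceptual sketch of that flow, assuming PyTorch; all module and function names here are hypothetical placeholders, not the paper's published code.

```python
# Hypothetical sketch of the pipeline described in the abstract (PyTorch).
import torch
import torch.nn as nn


class ModalityTransfer(nn.Module):
    """Plug-and-play module (assumed structure): maps an infrared image
    toward the visible domain so registration sees comparable features."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, ir: torch.Tensor) -> torch.Tensor:
        return self.net(ir)


def landing_pipeline(visible, infrared, register, fuse, detect_region, detect_lines):
    """End-to-end flow per the abstract; register/fuse/detect_* are
    hypothetical callables standing in for the paper's components.
    1. Transfer the infrared image, then register it to the visible image.
    2. Fuse the aligned pair into an enhanced image.
    3. Two-stage detection: coarse runway region, then fine runway lines."""
    transfer = ModalityTransfer()
    ir_like_vis = transfer(infrared)               # modality transfer
    ir_aligned = register(ir_like_vis, visible)    # pixel-level alignment
    enhanced = fuse(visible, ir_aligned)           # fusion-based enhancement
    region = detect_region(enhanced)               # stage 1: runway area
    lines = detect_lines(enhanced, region)         # stage 2: runway lines
    return lines
```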