Abstract

Two primary areas of research in the aviation industry are improved safety and reduced environmental impact. This work examines the potential of computer vision to increase situational awareness in avionics systems. Image fusion methods are tested on suitably pre-processed data from three image sensors, one in the visual spectrum and two in the infrared spectrum. The sensor configuration is chosen to cope with the varying weather and operating conditions of an aircraft, with an emphasis on the final approach and landing phases. To evaluate the image quality of the fusion processes in detail, comprehensive image quality assessment metrics derived from a systematic analysis are used. A total of four image fusion methods are evaluated, two of which are based on convolutional neural networks, using the networks' convolutional layers for feature extraction. The other methods are based on visual saliency maps and sparse representation. The results show that a traditional approach combining a rolling guidance filter for layer separation with a visual saliency map produces the best results among the methods implemented in MATLAB. Finally, the findings are validated with a subjective rating test in which the image quality of the fusion methods is further assessed.
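For illustration, a minimal MATLAB sketch of the best-performing approach named above (two-scale decomposition with a rolling guidance filter and saliency-weighted fusion of the detail layers) is given here. It is not the authors' implementation: only two of the three sensor inputs are shown, the rolling guidance filter is approximated with imgaussfilt and imguidedfilter from the Image Processing Toolbox, the saliency map is a smoothed Laplacian magnitude, and all function names outside the toolbox as well as the parameter values are illustrative assumptions.

% Sketch only, not the authors' method: fuse a visual-spectrum image and an
% infrared image (grayscale, same size, double in [0,1] after conversion).
function F = fuse_two_scale(vis, ir)
    vis = im2double(vis);  ir = im2double(ir);

    % Base/detail split via an approximate rolling guidance filter.
    rgf = @(I) rolling_guidance(I, 2.0, 0.01, 4);
    baseV = rgf(vis);   detailV = vis - baseV;
    baseI = rgf(ir);    detailI = ir  - baseI;

    % Crude visual saliency maps: smoothed Laplacian magnitude.
    lap  = fspecial('laplacian', 0.2);
    salV = imgaussfilt(abs(imfilter(vis, lap, 'replicate')), 5);
    salI = imgaussfilt(abs(imfilter(ir,  lap, 'replicate')), 5);
    wV   = salV ./ (salV + salI + eps);   % per-pixel weight in [0,1]

    % Average the base layers, saliency-weight the detail layers.
    F = 0.5*(baseV + baseI) + wV.*detailV + (1 - wV).*detailI;
end

function J = rolling_guidance(I, sigmaS, smoothing, iters)
    % Small-structure removal: start from a Gaussian-blurred guide and
    % repeatedly guided-filter the input against the current guide.
    J = imgaussfilt(I, sigmaS);
    for k = 1:iters
        J = imguidedfilter(I, J, ...
            'NeighborhoodSize', 2*ceil(2*sigmaS) + 1, ...
            'DegreeOfSmoothing', smoothing);
    end
end

The two-scale design keeps low-frequency scene structure from both sensors while letting the saliency weights select, pixel by pixel, which sensor contributes the fine detail; the number of rolling-guidance iterations and the smoothing parameters would need tuning for real sensor data.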
