Abstract

Multispectral images usually present complementary information such as visual-band imagery and infrared imagery (near infrared or long-wave infrared). There is strong evidence that fused multispectral imagery increases the reliability of interpretation (Rogers & Wood, 1990; Essock et al., 2001), whereas colorized multispectral imagery improves observer performance and reaction times (Toet et al., 1997; Varga, 1999; Waxman et al., 1996). A fused image in grayscale can be automatically analyzed by computers (for target recognition), while a colorized image can be easily interpreted by human users (for visual analysis). Imagine a nighttime navigation task executed by an aircraft equipped with a multisensor imaging system. Analyzing the combined or synthesized multisensor data will be more convenient and more efficient than simultaneously monitoring multispectral images such as visual-band imagery (e.g., image intensified, II), near-infrared (NIR) imagery, and infrared (IR) imagery. In this chapter, we discuss how to synthesize multisensor data using image fusion and night vision colorization techniques in order to improve the effectiveness and utility of multisensor imagery. It is anticipated that successful application of such an image synthesis approach will lead to improved performance in remote sensing, nighttime navigation, target detection, and situational awareness. This image synthesis approach involves two main techniques, image fusion and night vision colorization, which are reviewed in turn below. Image fusion combines multiple-source imagery by integrating complementary data in order to enhance the information apparent in the respective source images, as well as to increase the reliability of interpretation. This results in more accurate data (Keys et al., 1990) and increased utility (Rogers & Wood, 1990; Essock et al., 1999).
In addition, it has been reported that fused data provides more robust operational performance, such as increased confidence, reduced ambiguity, improved reliability, and improved classification (Rogers & Wood, 1990; Essock et al., 2001). A general framework for image fusion can be found in (Pohl & Genderen, 1998). In this chapter, our discussion focuses on pixel-level image fusion. A quantitative evaluation of fused image quality, which measures the amount of useful information and the amount of artifacts introduced in the fused image, is important for an objective comparison between the respective fusion algorithms.
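To make the two ideas above concrete, the following is a minimal sketch (not the chapter's method) of pixel-level fusion by per-pixel weighted averaging, together with histogram entropy as one simple, commonly used proxy for the information content of the fused result. The function names and the toy 2x2 patches are illustrative assumptions, not taken from the source.

```python
import math

def fuse_pixelwise(img_a, img_b, w=0.5):
    """Pixel-level fusion: per-pixel weighted average of two
    registered, same-size grayscale images (illustrative only)."""
    return [[w * a + (1 - w) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, a crude
    quantitative measure of information in an image."""
    hist = [0] * levels
    n = 0
    for row in img:
        for p in row:
            hist[min(levels - 1, max(0, int(p)))] += 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

# Toy 2x2 "visible" and "infrared" patches (gray levels 0-255).
vis = [[10, 200], [30, 220]]
ir = [[100, 50], [120, 40]]
fused = fuse_pixelwise(vis, ir)
```

Real pixel-level fusion schemes (e.g., multiresolution pyramid or wavelet fusion) replace the simple average with per-band selection rules, but the evaluation idea is the same: compare information measures such as entropy across candidate fused images.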
