Abstract

Thermal cameras and image intensifiers are the most common night vision (NV) cameras, enabling operations at night and in adverse weather conditions. NV cameras deliver monochrome images that are often hard to interpret and may give rise to visual illusions and loss of situational awareness. The two most common NV imaging systems display either emitted infrared radiation or reflected low-level light (LLL), so the two modalities provide complementary information about the objects or area under inspection. Techniques for fusing infrared and LLL images should therefore be employed to provide a compact representation of the scene with increased interpretation capability.

Image fusion methods can be classified into two types: pixel-based and region-based. Pixel-based fusion is the simplest and most popular. However, because pixel-based methods do not take the relationships between neighboring pixels into account, the fused image may lose some gray-level and feature information; for most image fusion applications it is more meaningful to combine objects rather than individual pixels. Region-based fusion, on the contrary, can obtain better results by considering the pixels of each region as a whole, and therefore has advantages over pixel-based methods. At present, region-based methods use a segmentation algorithm to separate the original image into different regions, and then design different fusion rules for different regions.

During the last decade, a number of gray-level fusion algorithms have been proposed, among which methods based on the multiscale transform (MST) are the most typical. Commonly used MST tools include the Laplacian pyramid and the discrete wavelet transform (DWT).
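As a concrete illustration (not part of the paper itself), the two simplest pixel-based fusion rules, weighted averaging and per-pixel maximum selection, can be sketched as follows, assuming the registered infrared and LLL images are given as NumPy arrays with intensities normalized to [0, 1]; the function names are hypothetical:

```python
import numpy as np

def pixel_fusion_average(ir, lll, w=0.5):
    """Pixel-based fusion by weighted averaging (the simplest rule)."""
    return w * ir + (1.0 - w) * lll

def pixel_fusion_max(ir, lll):
    """Pixel-based fusion by per-pixel maximum selection:
    keep whichever modality is brighter at each pixel."""
    return np.maximum(ir, lll)

# Toy 2x2 "infrared" and "low-level light" images.
ir  = np.array([[0.9, 0.1], [0.4, 0.6]])
lll = np.array([[0.2, 0.8], [0.5, 0.3]])
print(pixel_fusion_average(ir, lll))  # element-wise mean of the two inputs
print(pixel_fusion_max(ir, lll))      # element-wise maximum of the two inputs
```

Both rules operate on isolated pixels only, which is exactly the limitation the abstract points out: no neighborhood or region structure is used when deciding how to combine the two modalities.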
In general, owing to the desirable properties of the DWT, such as multi-resolution analysis, spatial and frequency localization, and directionality, DWT-based methods are superior to pyramid-based methods. However, the DWT also has limitations, such as a limited number of directions and a non-optimally-sparse representation of images. As a result, DWT-based methods easily introduce artifacts into the fused image, which consequently reduces the quality of the result. The Dual-Tree Complex Wavelet Transform (DT-CWT), introduced by Nick Kingsbury, addresses these issues and has the following properties: approximate shift invariance and good directional selectivity in 2-D with Gabor-like filters.
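To make the MST-based fusion pipeline concrete, the following sketch (an illustrative simplification, not the method of the paper) performs a single-level 2-D Haar wavelet fusion in NumPy: both images are decomposed into an approximation band and three detail bands, the approximations are averaged, the detail coefficients are selected by a maximum-absolute-value rule, and the fused image is reconstructed by the inverse transform. Real DWT/DT-CWT methods use longer filters, multiple levels, and more elaborate rules:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D orthonormal Haar transform (even-sized input assumed)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2   # approximation (low-low)
    lh = (a + b - c - d) / 2   # horizontal details
    hl = (a - b + c - d) / 2   # vertical details
    hh = (a - b - c + d) / 2   # diagonal details
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    lh, hl, hh = bands
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def dwt_fuse(img_a, img_b):
    """Fuse two registered images in the Haar wavelet domain:
    average the approximations, take max-absolute detail coefficients."""
    la, da = haar_dwt2(img_a)
    lb, db = haar_dwt2(img_b)
    ll = (la + lb) / 2
    det = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                for x, y in zip(da, db))
    return haar_idwt2(ll, det)
```

The max-absolute rule keeps, band by band, whichever source image has the stronger local variation, which is how MST fusion preserves salient edges from both modalities; selecting coefficients independently, however, is what can introduce the artifacts the abstract mentions.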
