Abstract

Multiscale fusion algorithms (wavelet- or pyramid-based) can generally handle multisensor image fusion. However, those algorithms are not ideal for fusing visible and infrared images whose intensities appear inverted. Therefore, a novel orientation-based fusion algorithm is proposed in this paper to address this problem. Specifically, a set of M×N Gabor wavelet transforms (GWT) is applied to the two input images (I_A and I_B). At each frequency band (b = 1, 2, ..., M), the index of the maximal GWT magnitude between the two images is selected pixel by pixel; two index frequencies, H_A(b) and H_B(b), are then calculated as the accumulation of these indices along the N orientations, respectively. The final H_A and H_B are weighted summations over the M bands, where the band weights W_b are given empirically. Eventually, the fused image is computed as I_F = (I_A .* H_A + I_B .* H_B) / (H_A + H_B), where '.*' denotes the element-by-element product of two arrays. The orientation-based fusion algorithm can be further varied by either keeping or suppressing the DC (direct current) component in the GWT: keeping DC produces a contrast-smooth image, while suppressing DC results in a sharpened fusion. Color fusion is achieved by replacing the red channel of a color image with the fused image, which is suitable for poorly illuminated color images. Not only does the algorithm fuse visible and infrared images satisfactorily, but its fusions of other image sets are also comparable to the results of multiscale fusion algorithms. The proposed algorithm can be extended to the fusion of multiple (more than two) images.
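
As a rough illustration of the pipeline described above, the following Python sketch assembles its main steps: a Gabor filter bank, per-pixel winner indices of GWT magnitude, accumulation of those indices over N orientations, band-weighted summation over M bands, and the final weighted average I_F. The kernel parameters, octave band spacing, uniform band weights, and the names gabor_kernel and orientation_fusion are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the orientation-based fusion, assuming a plain NumPy/SciPy
# Gabor filter bank; all filter parameters and band weights are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, size=31, sigma=4.0, suppress_dc=True):
    """Complex Gabor kernel at a given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * rot)
    kernel = envelope * carrier
    if suppress_dc:                      # "suppressing DC" -> sharpened fusion
        kernel -= kernel.mean()
    return kernel

def orientation_fusion(I_A, I_B, M=4, N=8, weights=None, suppress_dc=True):
    """Fuse two grayscale images by accumulating per-pixel winning indices
    of GWT magnitude over N orientations and M frequency bands."""
    I_A = np.asarray(I_A, dtype=float)
    I_B = np.asarray(I_B, dtype=float)
    if weights is None:
        weights = np.ones(M) / M         # band weights W_b (assumed uniform here)
    H_A = np.zeros_like(I_A)
    H_B = np.zeros_like(I_B)
    for b in range(M):
        freq = 0.25 / (2 ** b)           # assumed octave-spaced band frequencies
        hA_b = np.zeros_like(H_A)
        hB_b = np.zeros_like(H_B)
        for n in range(N):
            theta = n * np.pi / N
            k = gabor_kernel(freq, theta, suppress_dc=suppress_dc)
            mag_A = np.abs(fftconvolve(I_A, k, mode='same'))
            mag_B = np.abs(fftconvolve(I_B, k, mode='same'))
            winner_A = mag_A >= mag_B    # pixel-wise index of maximal GWT magnitude
            hA_b += winner_A             # index accumulation along N orientations
            hB_b += ~winner_A
        H_A += weights[b] * hA_b         # weighted summation through M bands
        H_B += weights[b] * hB_b
    denom = H_A + H_B
    denom[denom == 0] = 1.0              # guard against division by zero
    # I_F = (I_A .* H_A + I_B .* H_B) / (H_A + H_B)
    return (I_A * H_A + I_B * H_B) / denom
```

Calling orientation_fusion with suppress_dc=False keeps the DC response of the kernels, corresponding to the "keeping DC" variant that yields the contrast-smooth result described above.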
