Abstract

The performance of image fusion algorithms is evaluated using image fusion quality metrics and observer performance in identification perception experiments. Image Intensified (I<sup>2</sup>) and LWIR images are used as the inputs to the fusion algorithms. The test subjects are tasked with identifying potentially threatening handheld objects in both the original and fused images. The metrics used for evaluation are mutual information (MI), fusion quality index (FQI), weighted fusion quality index (WFQI), and edge-dependent fusion quality index (EDFQI). Some of the fusion algorithms under consideration are based on Peter Burt's Laplacian Pyramid, Toet's Ratio of Low Pass (RoLP, or contrast ratio), and Waxman's Opponent Processing. Also considered in this paper are pixel averaging, superposition, multi-scale decomposition, and the shift-invariant discrete wavelet transform (SIDWT). The fusion algorithms are compared using human performance in an object-identification perception experiment. The observer responses are then compared to the image fusion quality metrics to determine the degree of correlation, if any. The results of the perception test indicated that the opponent processing and ratio-of-contrast algorithms yielded the greatest observer performance on average. Task difficulty (V<sub>50</sub>) associated with the I<sup>2</sup> and LWIR imagery for each fusion algorithm is also reported.
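Of the metrics listed, mutual information is the most widely used: it measures how much information the fused image shares with each source image, computed from joint grey-level histograms. A minimal sketch of this idea is below; the function names, the 64-bin histogram, and the use of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Mutual information (in bits) between two grayscale images,
    estimated from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, src_a, src_b):
    """Common MI-based fusion score: information the fused image
    shares with each of the two source images (e.g. I2 and LWIR)."""
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```

A higher `fusion_mi` score indicates that the fused image retains more of the grey-level information from both source bands; the FQI family of metrics instead compares local structural statistics.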
