Abstract

The goal of image fusion is to combine information from multiple images of the same scene into a single image that is better suited to human and machine perception or to further image-processing tasks such as segmentation, feature extraction, object detection, and target recognition. In this paper we present a new fusion algorithm based on the redundant wavelet transform (RWT). The two source images are first decomposed using the RWT, which is shift-invariant. The coefficients of the approximation plane and the wavelet planes are then combined, and the fused image is reconstructed by the inverse RWT. To evaluate the fusion results, we investigate both subjective and objective evaluation measures. The experimental results show that the new algorithm outperforms conventional methods.
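The abstract outlines the pipeline (shift-invariant decomposition, coefficient combination, inverse transform) but does not specify the combination rules. The following is a minimal sketch in Python, assuming PyWavelets' stationary wavelet transform (pywt.swt2 / pywt.iswt2) as a stand-in for the RWT, and assuming two common fusion rules that may differ from the paper's: averaging for the approximation plane and absolute-maximum selection for the wavelet (detail) planes.

import numpy as np
import pywt

def fuse_rwt(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered grayscale images of the same size using a
    shift-invariant (stationary/redundant) wavelet transform.

    Sketch only: the combination rules below are common choices, not
    necessarily those used in the paper."""
    # swt2 requires each image dimension to be divisible by 2**level.
    assert img_a.shape == img_b.shape
    assert all(s % 2**level == 0 for s in img_a.shape)

    coeffs_a = pywt.swt2(img_a.astype(float), wavelet, level=level)
    coeffs_b = pywt.swt2(img_b.astype(float), wavelet, level=level)

    fused = []
    for (ca, (ha, va, da)), (cb, (hb, vb, db)) in zip(coeffs_a, coeffs_b):
        # Approximation plane: simple average of the two source images.
        c = (ca + cb) / 2.0
        # Wavelet (detail) planes: keep the coefficient with larger magnitude.
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in ((ha, hb), (va, vb), (da, db)))
        fused.append((c, details))

    # Inverse redundant transform reconstructs the fused image.
    return pywt.iswt2(fused, wavelet)

# Example with two hypothetical 256x256 source images:
# fused = fuse_rwt(source_a, source_b)

Because the stationary transform is undecimated, every plane has the same size as the input, which is what makes the scheme shift-invariant and avoids the blocking and ringing artifacts that coefficient selection can introduce with a decimated wavelet transform.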
