Abstract

In the field of image fusion, pixel-level multi-source image fusion methods can be divided into two categories: fusion in the spatial domain and fusion in the transform domain. When the coefficients of a fused image are combined, a fusion rule based on individual pixels or windows is usually applied, so the local features of the image are often not well represented. To address this problem, we propose a fusion method based on the discrete wavelet frame transform and regional characteristics. First, the transform coefficients of the two source images are obtained using the discrete wavelet frame transform. An average image, which roughly represents the features of both source images, is acquired by averaging the transform coefficients. The average image is then segmented according to region features, and the region coordinates obtained by segmentation are mapped onto the coefficients of the source images produced by the discrete wavelet frame transform. Finally, the coefficients of each region are combined using the specific fusion rules. Our experimental results demonstrate that the proposed method preserves the regional features of the source images better than fusion methods based on the Laplacian pyramid transform and the shift-invariant discrete wavelet transform, and that it delivers better performance in terms of both visual effects and an objective index.
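A minimal Python sketch of the pipeline described above is given below for illustration. It uses PyWavelets' stationary (undecimated) wavelet transform, swt2, as a stand-in for the discrete wavelet frame transform, a simple mean-threshold split of the average image as a placeholder for the region-feature segmentation, and a region-energy "choose-max" rule in place of the paper's specific fusion rules. The function names fuse_images and region_energy, the wavelet choice, and the decomposition level are all illustrative assumptions, not details taken from the paper.

    # Sketch of region-based fusion: not the paper's implementation.
    import numpy as np
    import pywt

    def region_energy(coeff, mask):
        # Broadcast each region's mean squared coefficient back onto the full grid.
        energy = np.empty_like(coeff)
        for label in (True, False):
            sel = (mask == label)
            energy[sel] = np.mean(coeff[sel] ** 2)
        return energy

    def fuse_images(img_a, img_b, wavelet="db2", level=2):
        # 1. Shift-invariant wavelet decomposition of both source images
        #    (image sides must be divisible by 2**level for swt2).
        coeffs_a = pywt.swt2(img_a, wavelet, level=level)
        coeffs_b = pywt.swt2(img_b, wavelet, level=level)

        # 2. Average the transform coefficients to obtain a rough "average image".
        avg_coeffs = [
            (0.5 * (ca + cb), tuple(0.5 * (da + db) for da, db in zip(det_a, det_b)))
            for (ca, det_a), (cb, det_b) in zip(coeffs_a, coeffs_b)
        ]
        avg_img = pywt.iswt2(avg_coeffs, wavelet)

        # 3. Segment the average image into regions (placeholder: split at the mean;
        #    the paper uses a region-feature segmentation).
        mask = avg_img > avg_img.mean()

        # 4. Map the region labels onto the source coefficients and fuse region by
        #    region (placeholder rule: keep the coefficient from the source whose
        #    region has the larger local energy).
        fused = []
        for (ca, det_a), (cb, det_b) in zip(coeffs_a, coeffs_b):
            fused_ca = np.where(region_energy(ca, mask) >= region_energy(cb, mask), ca, cb)
            fused_det = tuple(
                np.where(region_energy(da, mask) >= region_energy(db, mask), da, db)
                for da, db in zip(det_a, det_b)
            )
            fused.append((fused_ca, fused_det))

        # 5. Inverse transform to reconstruct the fused image.
        return pywt.iswt2(fused, wavelet)

Because the transform is undecimated, every coefficient array has the same shape as the input images, so a single segmentation mask computed on the average image can be applied directly at all decomposition levels.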
