Abstract

A novel region-based image fusion framework based on multiscale image segmentation and statistical feature extraction is proposed. A dual-tree complex wavelet transform (DT-CWT) and a statistical region merging algorithm are used to produce a region map of the source images. The input images are partitioned into meaningful regions containing salient information via symmetric α-stable (SαS) distributions. The region features are then modeled using bivariate α-stable (BαS) distributions, and the statistical measure of similarity between corresponding regions of the source images is calculated as the Kullback-Leibler distance (KLD) between the estimated BαS models. Finally, a segmentation-driven approach is used to fuse the images, region by region, in the complex wavelet domain. A novel decision method is introduced that considers the local statistical properties within the regions, which significantly improves the reliability of the feature selection and fusion processes. Simulation results demonstrate that the bivariate α-stable model outperforms the univariate α-stable and generalized Gaussian densities by capturing not only the heavy-tailed behavior of the subband marginal distributions but also the strong statistical dependencies between wavelet coefficients at different scales. The experiments show that our algorithm achieves better performance than previously proposed pixel-level and region-level fusion approaches in both subjective and objective evaluation tests.
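
As a rough illustration of the similarity measure and fusion rule described above, the sketch below fits α-stable models to the wavelet coefficients of a matched pair of regions, computes a symmetrised KLD between the fitted densities numerically, and then either selects the more active region or averages the two. This is a minimal, hypothetical sketch rather than the authors' implementation: it uses SciPy's univariate levy_stable in place of the bivariate BαS model, and the choose-or-average rule, the threshold, and the use of the fitted dispersion as an activity measure are illustrative assumptions.

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.integrate import trapezoid


def fit_sas(coeffs):
    """Fit an alpha-stable model to the coefficients of one region.

    Uses SciPy's maximum-likelihood fit, which can be slow; quantile or
    characteristic-function estimators are usually preferred in practice.
    Returns (alpha, beta, loc, scale); for zero-mean wavelet coefficients,
    beta and loc should come out near zero (i.e. approximately SaS).
    """
    samples = np.asarray(coeffs).ravel()
    if np.iscomplexobj(samples):
        # Treat real and imaginary parts of complex wavelet coefficients
        # as samples of the same marginal distribution.
        samples = np.concatenate([samples.real, samples.imag])
    return levy_stable.fit(samples)


def sas_kld(params_p, params_q, grid=None):
    """Numerical KLD D(p || q) between two fitted alpha-stable densities
    (coarse fixed grid, purely for illustration)."""
    if grid is None:
        grid = np.linspace(-10.0, 10.0, 2001)
    p = levy_stable.pdf(grid, *params_p)
    q = levy_stable.pdf(grid, *params_q)
    eps = 1e-12  # guard against log(0) in the tails
    return trapezoid(p * np.log((p + eps) / (q + eps)), grid)


def fuse_region(coeffs_a, coeffs_b, threshold=0.5):
    """Choose-or-average rule for one region of wavelet coefficients.

    If the regions are statistically dissimilar (large symmetrised KLD),
    keep the coefficients whose fitted model has the larger dispersion
    (scale), i.e. the more 'active' region; otherwise average them.
    """
    pa, pb = fit_sas(coeffs_a), fit_sas(coeffs_b)
    d = 0.5 * (sas_kld(pa, pb) + sas_kld(pb, pa))
    if d > threshold:
        return coeffs_a if pa[3] > pb[3] else coeffs_b
    return 0.5 * (coeffs_a + coeffs_b)


# Tiny usage example on synthetic heavy-tailed "coefficients"
# (small samples only -- the MLE fit is expensive).
rng = np.random.default_rng(0)
region_a = levy_stable.rvs(1.5, 0.0, scale=2.0, size=200, random_state=rng)
region_b = levy_stable.rvs(1.9, 0.0, scale=1.0, size=200, random_state=rng)
fused = fuse_region(region_a, region_b)
```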
