In this paper, we propose a new approach for multiresolution fusion using the contourlet transform (CT). The method models the low spatial resolution (LR), high spectral resolution multispectral (MS) image as a degraded and noisy version of its high spatial resolution counterpart. Since this is an ill-posed problem, regularization is required to obtain the final solution. We first obtain an initial estimate of the fused image from the available MS image and the panchromatic (Pan) image using CT-domain learning. Since the CT captures directional edges well, the initial estimate has better edge details. Using this initial estimate, we then estimate the degradation that accounts for the aliasing between the LR MS image and the fused image. Regularization is carried out by modeling the texture of the final fused image with a homogeneous Markov random field (MRF) prior, whose parameter is estimated from the initial estimate. The MRF prior on the final fused image captures the spatial dependencies among pixels. A simple gradient-based optimization technique is used to obtain the final fused image. Although we use a homogeneous MRF, the proposed approach preserves edges in the final fused image by retaining them from the initial estimate and carrying out the optimization on nonedge pixels only. The advantage of the proposed method therefore lies in preserving discontinuities without a discontinuity-preserving prior, thus avoiding computationally taxing optimization techniques for regularization. In addition, the proposed method causes minimal spectral distortion, since it learns the texture from contourlet coefficients and does not use the actual Pan pixel intensities. We demonstrate the effectiveness of our approach through experiments with subsampled and nonsubsampled CT on data sets captured by the Ikonos-2, QuickBird, and WorldView-2 satellites. A minimal sketch of the regularized reconstruction step described above follows.
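The sketch below illustrates the gradient-based optimization for one MS band under simplifying assumptions; it is not the paper's implementation. The blur-plus-decimation operator is taken to be a plain q×q box average, the MRF prior is a first-order quadratic smoothness term with a hand-set weight `lam` (the paper estimates the degradation and the MRF parameter from the CT-learned initial estimate), and all names (`degrade`, `fuse_band`, `edge_mask`) are hypothetical.

```python
import numpy as np

def degrade(z, q):
    """Blur and decimate a high-resolution band: q x q box average (assumed model)."""
    h, w = z.shape
    return z.reshape(h // q, q, w // q, q).mean(axis=(1, 3))

def degrade_adjoint(r, q):
    """Adjoint of the box-average decimation: spread each LR residual over its block."""
    return np.kron(r, np.ones((q, q))) / (q * q)

def mrf_gradient(z):
    """Gradient of a homogeneous first-order MRF prior:
    sum of squared differences over 4-neighbor pairs."""
    g = np.zeros_like(z)
    g[:-1, :] += z[:-1, :] - z[1:, :]   # vertical pairs, upper pixel
    g[1:, :]  += z[1:, :] - z[:-1, :]   # vertical pairs, lower pixel
    g[:, :-1] += z[:, :-1] - z[:, 1:]   # horizontal pairs, left pixel
    g[:, 1:]  += z[:, 1:] - z[:, :-1]   # horizontal pairs, right pixel
    return 2.0 * g

def fuse_band(y_lr, z0, edge_mask, q=4, lam=0.05, step=0.5, iters=200):
    """
    Gradient descent on ||y - DBz||^2 + lam * MRF(z) for one fused band.
    y_lr      : observed LR MS band
    z0        : initial estimate (learned in the CT domain in the paper)
    edge_mask : True at edge pixels, which are frozen to z0 to keep discontinuities
    """
    z = z0.copy()
    for _ in range(iters):
        residual = degrade(z, q) - y_lr            # data-fidelity residual
        grad = 2.0 * degrade_adjoint(residual, q)  # gradient of the data term
        grad += lam * mrf_gradient(z)              # gradient of the MRF prior term
        grad[edge_mask] = 0.0                      # update nonedge pixels only
        z -= step * grad
    return z

# toy usage: 64x64 initial estimate, 16x16 noisy LR observation, no edges frozen
rng = np.random.default_rng(0)
z0 = rng.random((64, 64))
y = degrade(z0, 4) + 0.01 * rng.standard_normal((16, 16))
z_hat = fuse_band(y, z0, edge_mask=np.zeros((64, 64), dtype=bool))
```

Freezing the edge pixels to the initial estimate is what allows a simple homogeneous prior and plain gradient descent to suffice: discontinuities never enter the smoothing step, so no discontinuity-preserving prior or costlier optimizer is needed.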