Interpreting remote sensing images by combining manual visual interpretation with automatic computer classification and recognition is an important application of human–computer interaction (HCI) in the field of remote sensing. Remote sensing images with high spatial resolution and high spectral resolution are an important basis for automatic classification and recognition. However, such images are often difficult to obtain directly. To address this problem, a novel pan-sharpening method based on multi-scale analysis and multiple deep neural networks (DNNs) is presented. First, the non-subsampled contourlet transform (NSCT) is employed to decompose the high-resolution (HR) and low-resolution (LR) panchromatic (PAN) images into high-frequency (HF) and low-frequency (LF) images, respectively. For pan-sharpening, the training sets are sampled only from the HF images. Then, DNNs are utilized to learn the features of the HF images in different directions of the HR/LR PAN images; each network is trained on image patch pairs sampled from the HF images of the HR and LR PAN images. Moreover, in the fusion stage, NSCT is also employed to decompose the principal component of the initially upsampled LR multispectral (MS) image, obtained via the adaptive PCA (A-PCA) transformation. The HF image patches of the LR MS image, used as input to the trained DNNs, pass through forward propagation to produce the HF sub-bands of the HR MS image. Finally, the output HF sub-band images and the original LF sub-band images of the LR MS image are fused into a new sub-band set. The inverse NSCT and inverse A-PCA transformations, together with residual compensation, are then applied to obtain the pan-sharpened HR MS image. The experimental results show that the proposed method outperforms other well-known pan-sharpening methods.
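The fusion stage described above can be sketched as follows. This is a minimal, illustrative Python/numpy sketch only: the NSCT is replaced here by a simple box-filter low-pass/high-pass split, the A-PCA transformation by plain PCA, and the trained DNN by a placeholder `hf_mapping` function; all function names are hypothetical, not from the paper's implementation.

```python
import numpy as np

def lowpass(img, k=5):
    """Crude low-pass filter via box blur (stand-in for the NSCT LF sub-band)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img):
    """Split an image into low-frequency and high-frequency parts (LF + HF = img)."""
    lf = lowpass(img)
    return lf, img - lf

def pca_forward(ms):
    """Plain-PCA stand-in for A-PCA: project an H x W x B MS cube onto its PCs."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / xc.shape[0]
    _, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]            # descending eigenvalue order
    return (xc @ vecs).reshape(h, w, b), mean, vecs

def pca_inverse(pcs, mean, vecs):
    """Invert pca_forward (vecs is orthonormal, so vecs.T is its inverse)."""
    h, w, b = pcs.shape
    return (pcs.reshape(-1, b) @ vecs.T + mean).reshape(h, w, b)

def pan_sharpen(ms_up, pan, hf_mapping=lambda hf: hf):
    """Pipeline sketch: PCA -> HF/LF split -> map HF -> recombine -> inverse PCA.

    `hf_mapping` is a placeholder for the trained DNNs that the paper applies
    to HF patches; here it defaults to the identity.
    """
    pcs, mean, vecs = pca_forward(ms_up)
    pc1 = pcs[:, :, 0]
    lf, _ = decompose(pc1)
    _, pan_hf = decompose(pan)
    # Inject the PAN high frequencies into the first principal component
    pcs_new = pcs.copy()
    pcs_new[:, :, 0] = lf + hf_mapping(pan_hf)
    fused = pca_inverse(pcs_new, mean, vecs)
    # Crude residual compensation: keep per-band means consistent with the input MS
    fused += ms_up.mean(axis=(0, 1)) - fused.mean(axis=(0, 1))
    return fused
```

In practice the NSCT provides a shift-invariant, multi-directional decomposition that a single box filter cannot, which is why the paper trains separate networks per direction; the sketch only conveys the overall data flow.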