Abstract

Earth observation satellites provide data covering different parts of the electromagnetic spectrum at different spatial, spectral, and temporal resolutions. To use these different types of image data effectively, a number of image fusion techniques have been developed. Image fusion is defined as the set of methods, tools, and means of combining data from two or more different images to improve the quality of the resulting information (1). The fused image carries richer information, which improves the performance of image analysis algorithms. This increase in information quality leads to better processing accuracies (e.g., classification, segmentation) than using one type of data alone. In this paper, pixel-level and feature-level image fusion are applied to the classification of co-registered QuickBird multispectral and panchromatic images.

I. INTRODUCTION

A wide spectrum of remotely sensed data, such as multispectral imagery, radar imagery, hyperspectral imagery, geographic information system (GIS) map data, and light detection and ranging (LiDAR) data, is now available. For many image analysis applications, the information provided by a single imagery type or source is incomplete or insufficient. Additional sources may provide complementary information that helps to better characterize the observed land cover. Image fusion is used extensively to combine complementary information from different sensors and provide a better understanding of the observed Earth surface. Image fusion takes place at three different levels: pixel, feature, and decision (2). In pixel-level fusion, a new image is formed whose pixel values are obtained by combining the pixel values of different images through some algorithm. The new image is then used for further processing, such as feature extraction and classification. In feature-level fusion, features are extracted from the different types of images of the same geographic area. The extracted features are then classified using statistical or other types of classifiers.
In decision-level fusion, the images are processed separately. The processed information is then refined by combining the information obtained from the different sources, and differences between the sources are resolved based on certain decision rules. Figure 1 provides a visual interpretation of the different levels of fusion. In this paper, pixel-level fusion and feature-level fusion were used to classify a QuickBird multispectral image. The QuickBird panchromatic image was used to extract complementary spatial information. The classification results are compared with those of the original multispectral image.

II. IMAGE FUSION

A. Pixel-level fusion

Pansharpening is a pixel-level fusion technique used to increase the spatial resolution of a multispectral image. Pansharpening techniques increase spatial resolution while simultaneously preserving the spectral information in the multispectral data. Pansharpening is also known as resolution merge, image integration, and multisensor data fusion. Applications of pansharpening include improving geometric correction, enhancing features not visible in either data set alone, change detection using temporal data sets, and enhancing classification. Different pansharpening algorithms are discussed in the literature. Pohl et al. (2) provided a detailed review of the methods used for pansharpening and of the need to assess the quality of the fused image. Intensity-Hue-Saturation (IHS) transform based sharpening, principal component analysis (PCA) based sharpening, Brovey sharpening, regression model based sharpening, and wavelet transform based sharpening are some of the widely used techniques. The IHS and Brovey sharpening techniques provide good spatial quality but poor spectral quality. PCA based sharpening performs better than IHS and Brovey sharpening; however, its performance varies with the data used. Different wavelet based techniques are available in the literature.
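To make one of these pixel-level schemes concrete, the sketch below implements Brovey sharpening with NumPy: each multispectral band is scaled by the ratio of the panchromatic value to the mean multispectral intensity, which injects the panchromatic spatial detail but distorts the spectra, the weakness noted above. This is a minimal sketch, assuming the multispectral bands have already been upsampled and co-registered to the panchromatic grid; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def brovey_sharpen(ms, pan, eps=1e-6):
    """Brovey pansharpening sketch.

    ms  : (bands, H, W) multispectral image, assumed already upsampled
          and co-registered to the panchromatic grid.
    pan : (H, W) panchromatic image.
    Each band is multiplied by pan / intensity, so the fused intensity
    follows the panchromatic image while per-pixel band ratios are kept.
    """
    intensity = ms.mean(axis=0) + eps   # eps guards against division by zero
    return ms * (pan / intensity)

# toy example: a 3-band 4x4 scene
rng = np.random.default_rng(0)
ms = rng.uniform(0.2, 0.8, size=(3, 4, 4))
pan = rng.uniform(0.2, 0.8, size=(4, 4))
fused = brovey_sharpen(ms, pan)
print(fused.shape)  # (3, 4, 4)
```

Note that the per-pixel scale factor cancels in band ratios, so hue-like quantities survive while absolute radiometry does not, which is why Brovey output is considered spectrally distorted.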
The wavelet based techniques differ in the type of wavelet transform, the mother wavelet, and the combination rule used for merging the multispectral and panchromatic data. J. Nunez et al. (3) used the 'à trous' wavelet transform to fuse multispectral and panchromatic images. The IHS transform was used to preprocess the multispectral data, and the intensity band was used in the fusion process. The detail coefficients of the panchromatic image were either added to the multispectral image, or some of its high-frequency details were replaced by the corresponding panchromatic details. R. L. King and Jianwen Wang (4) used the dyadic discrete wavelet transform (DWT) with the biorthogonal 9/7 mother wavelet. The detail coefficients of the panchromatic image are added to the intensity component to enhance the spatial resolution. Other wavelet based techniques use the redundant discrete wavelet transform (RDWT), which reduces some of the artifacts produced by DWT-based schemes. Most of these techniques are available in commercial remote sensing software packages such as ERDAS Imagine®, ENVI®, and PCI Geomatica®.
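The substitutive wavelet scheme described above can be sketched as follows: decompose one (already upsampled) multispectral band and the panchromatic image with a one-level 2-D wavelet transform, keep the multispectral approximation sub-band, and take the detail sub-bands from the panchromatic image before inverting the transform. A hand-rolled Haar wavelet stands in for the biorthogonal 9/7 filters of (4) so the example stays self-contained; all names and the toy arrays are illustrative assumptions, not the authors' code.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform of an even-sized array -> cA, (cH, cV, cD)."""
    a = (x[0::2] + x[1::2]) / 2.0          # row-pair averages (low-pass)
    d = (x[0::2] - x[1::2]) / 2.0          # row-pair differences (high-pass)
    cA = (a[:, 0::2] + a[:, 1::2]) / 2.0   # LL: approximation
    cH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # LH detail
    cV = (d[:, 0::2] + d[:, 1::2]) / 2.0   # HL detail
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0   # HH detail
    return cA, (cH, cV, cD)

def haar_idwt2(cA, details):
    """Exact inverse of haar_dwt2."""
    cH, cV, cD = details
    a = np.empty((cA.shape[0], cA.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = cA + cH, cA - cH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = cV + cD, cV - cD
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_substitute(band, pan):
    """Keep the MS approximation; substitute pan detail sub-bands."""
    cA_ms, _ = haar_dwt2(band)
    _, details_pan = haar_dwt2(pan)
    return haar_idwt2(cA_ms, details_pan)

rng = np.random.default_rng(0)
band = rng.uniform(size=(8, 8))   # one upsampled multispectral band
pan = rng.uniform(size=(8, 8))    # co-registered panchromatic image
fused = wavelet_substitute(band, pan)
print(fused.shape)  # (8, 8)
```

Because the transform is invertible, re-decomposing the fused band recovers exactly the multispectral approximation and the panchromatic details, which is the defining property of the substitution rule; the additive variants in (3) and (4) would instead add the panchromatic details to the existing coefficients.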
