Abstract

All the commercial satellites (SPOT, LANDSAT, IRS, IKONOS, Quickbird and Orbview) collect a high-spatial-resolution panchromatic image and multiple (usually four) multispectral images with significantly lower spatial resolution. The Pan images are characterised by a very high spatial information content, well suited for intermediate-scale mapping applications and urban analysis. The multispectral images provide the essential spectral information for smaller-scale thematic mapping applications such as land-use surveys. Why don't most satellites collect high-resolution MS images directly, to meet this requirement for high spatial and high spectral resolution? There is a limit to the data volume that a satellite sensor can store on board and then transmit to a ground receiving station. Usually the panchromatic image is many times larger than each multispectral image. The panchromatic image of Landsat ETM+ is four times larger than an ETM+ multispectral image, because its spatial resolution is twice as fine in each dimension; the panchromatic images of IKONOS, Quickbird, SPOT5 and Orbview are sixteen times larger than the respective multispectral images, their resolution being four times finer in each dimension. As a result, if a sensor collected high-resolution multispectral data it could acquire fewer images during every pass. Considering these limitations, it is clear that the most effective way to provide remote sensing images of both high spatial and high spectral resolution is to develop effective image fusion techniques. Image fusion is a technique used to integrate the geometric detail of a high-resolution panchromatic (Pan) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution MS image. During the last twenty years many methods, such as Principal Component Analysis (PCA), the Multiplicative Transform, the Brovey Transform and the IHS Transform, have been developed that produce good-quality fused images. Despite the quite good optical results, many research papers have reported the limitations of these fusion techniques. The most significant problem is color distortion. Another common problem is that the fusion quality often depends upon the operator's fusion experience and upon the data set being fused; no automatic solution has been achieved that consistently produces high-quality fusion for different data sets. More recently, new techniques have been proposed, such as the Wavelet Transform, the Pansharp Transform and the Modified IHS Transform. These techniques seem to reduce the color distortion problem and to keep the statistical parameters invariable. In this study we compare the efficiency of eight fusion techniques, and more especially of the Multiplicative, Brovey, IHS, Modified IHS, PCA, Pansharp, Wavelet and LMM (Local Mean Matching) fusion techniques, for the fusion of Ikonos data. For each merged image we have examined the optical qualitative result and the statistical parameters of the histograms of the various frequency bands, especially the standard deviation. All the fusion techniques improve the resolution and the optical result. The Pansharp, Wavelet and Modified IHS merging techniques do not change the statistical parameters of the original images at all. These merging techniques are proposed if the researcher wants to proceed to further processing, for example computing different vegetation indices or performing classification using the spectral signatures.
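
As a concrete illustration of one of the compared methods, below is a minimal sketch of the Brovey transform in Python/NumPy. It assumes the MS bands have already been resampled and co-registered to the Pan pixel grid and that three bands are fused; the function name brovey_fusion and the synthetic test arrays are illustrative and not taken from the study. The final loop mirrors the statistical check described above by printing the standard deviation of each band before and after fusion.

    import numpy as np

    def brovey_fusion(ms_bands, pan, eps=1e-6):
        """Brovey fusion: scale each MS band by Pan / (sum of the MS bands).

        ms_bands: array of shape (bands, rows, cols), resampled to the Pan grid.
        pan:      array of shape (rows, cols).
        """
        ms_bands = ms_bands.astype(np.float64)
        pan = pan.astype(np.float64)
        band_sum = ms_bands.sum(axis=0) + eps   # eps avoids division by zero
        # Broadcasting applies the Pan/band_sum ratio to every band, injecting
        # the Pan spatial detail while preserving the band ratios (the color).
        return ms_bands * (pan / band_sum)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ms = rng.uniform(0.0, 255.0, size=(3, 64, 64))   # synthetic MS bands on the Pan grid
        pan = rng.uniform(0.0, 255.0, size=(64, 64))     # synthetic Pan image
        fused = brovey_fusion(ms, pan)
        for i in range(3):
            print(f"band {i}: std before = {ms[i].std():6.2f}, after = {fused[i].std():6.2f}")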
