Abstract

The fusion of image data from different sensor types is an important processing step in many remote sensing applications, as it maximizes the information that can be retrieved from a given area of interest. The basic fusion process is to select a common coordinate system and resample all data into this new image space. Usually this is done by orthorectifying the different image spaces, i.e., transforming each image's projection plane to a geographic coordinate system. Unfortunately, resampling the slant-range image space of a spaceborne synthetic aperture radar (SAR) to such a coordinate system strongly distorts its content and therefore reduces the amount of extractable information: the complex SAR signatures, which are already hard to interpret in the original data, become even harder to understand. To preserve as much information as possible, this paper presents an approach that instead transforms optical images into the radar image space. This is accomplished by taking an optical image together with a digital elevation model and projecting it onto the same slant-range image plane as that of the radar acquisition. The whole process is demonstrated in detail on practical examples.
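The core geometric step described above — mapping ground points (optical pixels lifted onto a DEM) into SAR slant-range coordinates — can be sketched as a zero-Doppler projection: each ground point is assigned to the azimuth line of the sensor's closest approach, and its range coordinate is the distance to the sensor at that line. The following is a minimal brute-force sketch of this idea, not the paper's actual implementation; the function name, array layout, and the simplified straight sensor track are illustrative assumptions.

```python
import numpy as np

def project_to_slant_range(ground_points, sensor_track):
    """Map 3-D ground points into slant-range geometry (zero-Doppler).

    ground_points : (N, 3) array of ground coordinates (illustrative frame)
    sensor_track  : (M, 3) array of sensor positions, one per azimuth line

    Returns, per ground point, the azimuth line index of closest approach
    and the slant range (sensor-to-point distance) at that line.
    """
    # Pairwise distances between every ground point and every track position.
    diff = ground_points[:, None, :] - sensor_track[None, :, :]  # (N, M, 3)
    dist = np.linalg.norm(diff, axis=2)                          # (N, M)
    az = dist.argmin(axis=1)   # azimuth line of closest approach
    rg = dist.min(axis=1)      # slant range at that line
    return az, rg

# Toy example: a straight track at 700 m altitude over flat terrain.
track = np.array([[0.0, 0.0, 700.0],
                  [0.0, 100.0, 700.0],
                  [0.0, 200.0, 700.0]])
ground = np.array([[0.0, 100.0, 0.0]])  # point directly below line 1
az, rg = project_to_slant_range(ground, track)
```

In practice the orbit is curved, the Earth model is ellipsoidal, and the closest-approach time is found analytically rather than by brute-force search, but the brute-force version conveys why a DEM is indispensable: without the height component of each ground point, the slant range cannot be computed correctly.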
