Abstract
The fusion of optical and radar remote sensing data has recently become a topic of active discussion in various application areas, although the results are not always satisfactory. In this article, we analyse some disturbing aspects of fusing orthoimages acquired by sensors with different acquisition geometries. These aspects arise from errors in the digital elevation models (DEMs) used for image orthorectification, and from the presence of 3-D objects in the scene that are not accounted for in the DEM. We analyse how these effects influence ground displacement in orthoimages produced from optical and radar data. Further, we propose sensor formations with acquisition geometry parameters that minimise or compensate for the ground displacements between different orthoimages caused by the above-mentioned effects, thereby providing good prerequisites for subsequent fusion in specific application areas, e.g. matching, filling data gaps, and classification. To demonstrate the potential of the proposed approach, two pairs of optical–radar data were acquired over an urban area, Munich, Germany. The first collection, of WorldView-1 and TerraSAR-X (TS-X) data, followed the proposed recommendations for acquisition geometry parameters, whereas the second collection, of IKONOS and TS-X data, was acquired with arbitrary parameters. The experiment fully confirmed our ideas and, moreover, opens new possibilities for optical and radar image fusion.