Abstract

For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images are required. However, applying well-known matching methods, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), to VHR multi-temporal images has limitations. First, these methods cannot match an optical image to heterogeneous non-optical data for georegistration. Second, they produce local misalignments induced by differences in acquisition conditions, such as acquisition-platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. This study therefore addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. In the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. In the second step, registration noise pixels extracted between the georegistered multi-temporal images are analyzed locally to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that minimizes the local misalignment remaining among the images. Experiments on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework: the georegistration step achieved approximately pixel-level accuracy for most of the scenes, and the co-registration step further improved the alignment of all combinations of the georegistered Kompsat-3 image pairs, as indicated by increased cross-correlation values.
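The abstract does not specify which non-rigid model is built from the CPs, so the sketch below is a hedged illustration only: one common choice is a piecewise-affine warp over a triangulation of the corresponding points, shown here with scikit-image. The function name warp_to_reference and the choice of model are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming a piecewise-affine model; the paper only states
# that a non-rigid transformation function is built from the CPs, so the
# concrete model and this helper are illustrative assumptions.
import numpy as np
from skimage import transform

def warp_to_reference(target_img, cps_target, cps_reference):
    """Warp target_img so that its CPs land on the reference CP locations.

    cps_target, cps_reference: (N, 2) arrays of (col, row) image coordinates.
    """
    tform = transform.PiecewiseAffineTransform()
    # warp() uses the transform as an inverse map (output -> input coordinates),
    # so the source points must be the reference CPs and the destination
    # points the target-image CPs.
    tform.estimate(np.asarray(cps_reference), np.asarray(cps_target))
    return transform.warp(target_img, tform)
```

Any comparable local model (e.g., a thin-plate spline) could be substituted; the paper evaluates the result by the increase in cross-correlation between the image pairs.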

Highlights

  • Owing to the frequent accessibility of very-high-resolution (VHR) satellite images, time-series analysis using VHR multi-temporal images has been conducted for a wide range of remote-sensing applications [1,2,3,4,5,6]

  • This paper proposes an automated geo/co-registration framework for full-scene images acquired from a VHR optical satellite sensor

  • During the fine co-registration step, the dominant registration noise (RN) pixels between the ortho-rectified images are extracted and used to identify a large number of well-distributed corresponding points (CPs) over the entire overlapping region between the images
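The RN-driven CP extraction itself is specific to the paper, but the cross-correlation measure it relies on can be illustrated generically: given a roughly aligned ortho-rectified pair, a candidate point in the reference image is matched in the other image by searching a small window for the normalized cross-correlation (NCC) peak. The function below, its window sizes, and its threshold are illustrative assumptions, not the authors' procedure.

```python
# Simplified sketch: refine one candidate CP by an NCC search in a local window.
# Patch/search sizes and the NCC threshold are arbitrary illustrative values.
import numpy as np
from skimage.feature import match_template

def refine_cp(reference, target, row, col, patch=16, search=8, min_ncc=0.7):
    """Return the (row, col) in `target` matching (row, col) in `reference`,
    or None if the NCC peak is too weak. Assumes the point lies far enough
    from the image borders for the windows below to be valid."""
    tmpl = reference[row - patch:row + patch, col - patch:col + patch]
    win = target[row - patch - search:row + patch + search,
                 col - patch - search:col + patch + search]
    ncc = match_template(win, tmpl)            # (2*search+1, 2*search+1) NCC surface
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    if ncc[peak] < min_ncc:
        return None                            # unreliable correspondence, discard
    dr, dc = peak[0] - search, peak[1] - search
    return row + dr, col + dc
```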

Summary

Introduction

Owing to the frequent accessibility of very-high-resolution (VHR) satellite images, time-series analysis using VHR multi-temporal images has been conducted for a wide range of remote-sensing applications [1,2,3,4,5,6]. For such applications to succeed, the images must be accurately georegistered to the map coordinates, at approximately pixel-level accuracy [7]. Most VHR satellite images are delivered with rational polynomial coefficients (RPCs) that represent the ground-to-image geometry, allowing photogrammetric processing without a physical sensor model. To achieve higher accuracy, such as the one-pixel level, bias compensation of the RPCs is often conducted manually using a global navigation satellite system (GNSS) survey or accurate reference data.
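As a hedged illustration of the bias compensation mentioned above (not the automated procedure proposed in the paper), the sketch below projects ground control points (GCPs) through the vendor RPCs and fits an affine correction in image space. Here rpc_project stands for any standard RPC ground-to-image evaluation (ratios of cubic polynomials) and is assumed rather than defined.

```python
# Illustrative sketch of affine bias compensation of RPC-projected image
# coordinates; `rpc_project(lon, lat, h) -> (row, col)` is assumed to exist.
import numpy as np

def estimate_bias_affine(rpc_project, gcps_lonlat_h, gcps_rowcol):
    """Fit row' = a0 + a1*row + a2*col (and likewise for col') from GCP residuals."""
    pred = np.array([rpc_project(lon, lat, h) for lon, lat, h in gcps_lonlat_h])
    obs = np.asarray(gcps_rowcol, dtype=float)
    A = np.column_stack([np.ones(len(pred)), pred])   # design matrix [1, row, col]
    coef_row, *_ = np.linalg.lstsq(A, obs[:, 0], rcond=None)
    coef_col, *_ = np.linalg.lstsq(A, obs[:, 1], rcond=None)
    return coef_row, coef_col

def apply_bias(coef_row, coef_col, row, col):
    """Apply the estimated affine correction to one RPC-projected point."""
    v = np.array([1.0, row, col])
    return float(v @ coef_row), float(v @ coef_col)
```

When the RPC bias is nearly constant across a scene, a simpler shift-only correction (the a0 terms alone) is also commonly used.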
