Relative radiometric normalization (RRN) is a critical pre-processing step that enables accurate comparisons of multitemporal remote-sensing (RS) images for applications such as unsupervised change detection. Although existing RRN methods generally produce promising results, their effectiveness depends on specific conditions, especially in scenarios where the image pairs exhibit land cover/land use (LULC) differences or were captured at different locations. These methods often overlook such complexities, potentially introducing biases into RRN results, mainly because they rely on spatially aligned pseudo-invariant features (PIFs) for modeling. To address this, we introduce a location-independent RRN (LIRRN) method that can automatically identify non-spatially matched PIFs based on brightness characteristics. Additionally, as a fast and coregistration-free model, LIRRN complements keypoint-based RRN, yielding more accurate results in applications where coregistration is crucial. The LIRRN process starts by segmenting the reference and subject images into dark, gray, and bright zones using the multi-Otsu thresholding technique. PIFs are then efficiently extracted from each zone using nearest-distance-based image content matching without any spatial constraints. These PIFs are used to construct a linear model that calibrates the subject image on a band-by-band basis. The performance evaluation involved tests on five registered/unregistered bitemporal satellite images, comparing LIRRN with three conventional methods: histogram matching (HM), blockwise KAZE, and keypoint-based RRN algorithms. Experimental results consistently demonstrated LIRRN's superior performance, particularly in handling unregistered datasets. LIRRN also exhibited faster execution times than the blockwise KAZE and keypoint-based approaches while yielding results comparable to those of HM in estimating normalization coefficients.
Combining the LIRRN and keypoint-based RRN models produced even more accurate and reliable results, albeit with a slight increase in computational time. To support further investigation and development of LIRRN, its code and some sample datasets are available via the link in the Data Availability Statement.
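The pipeline described in the abstract (multi-Otsu zoning of each image, zone-wise PIF extraction by brightness matching without spatial constraints, and a per-band linear fit) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `lirrn_band`, the quantile-based rank pairing used to stand in for nearest-distance content matching, and the parameter `n_pifs` are all assumptions.

```python
import numpy as np
from skimage.filters import threshold_multiotsu  # multi-Otsu segmentation

def lirrn_band(subject, reference, n_pifs=500):
    """Sketch of location-independent normalization for one band (assumption).

    Returns (gain, offset) of a linear model mapping subject -> reference.
    """
    # Segment each image independently into dark/gray/bright zones,
    # so no spatial alignment between the two images is required.
    t_s = threshold_multiotsu(subject, classes=3)
    t_r = threshold_multiotsu(reference, classes=3)
    zones_s = np.digitize(subject, t_s)  # zone labels 0, 1, 2
    zones_r = np.digitize(reference, t_r)

    pif_s, pif_r = [], []
    for z in range(3):
        vs = subject[zones_s == z].ravel()
        vr = reference[zones_r == z].ravel()
        if vs.size == 0 or vr.size == 0:
            continue
        # Pair brightness values at matching ranks within each zone;
        # this quantile pairing is a simple proxy (assumption) for the
        # paper's nearest-distance-based image content matching.
        q = np.linspace(0.0, 1.0, n_pifs)
        pif_s.append(np.quantile(vs, q))
        pif_r.append(np.quantile(vr, q))

    pif_s = np.concatenate(pif_s)
    pif_r = np.concatenate(pif_r)

    # Band-wise linear calibration model fitted to the extracted PIFs.
    gain, offset = np.polyfit(pif_s, pif_r, 1)
    return gain, offset
```

In a full implementation this function would be applied to each band of the subject image, and the calibrated band would be `gain * subject + offset`.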