Remote sensing enables large-scale scientific studies that require extensive mapping and the mosaicking of numerous images. However, owing to variations in radiometric conditions, atmospheric effects, sensor viewpoints, and land cover, significant color discrepancies often arise between images, so color consistency adjustment is necessary for effective image mosaicking and downstream applications. Existing color consistency methods for remote sensing images struggle with complex one-to-many nonlinear color-mapping relationships and often introduce texture distortions. To address these challenges, this study proposes a convolutional neural network-based color consistency method for remote sensing cartography that jointly considers global and local color mapping, together with texture mapping constrained by the source domain. The method handles complex color-mapping relationships while minimizing texture distortion in the target image. Comparative experiments on remote sensing images acquired at different times, by different sensors, and at different resolutions demonstrated that our method achieved superior color consistency, preserved fine texture details, and produced visually appealing results, supporting the generation of large-area data products.
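For context, a classical baseline for the global color-mapping step the abstract describes is per-channel histogram matching, which remaps pixel intensities of a target tile so their distribution follows a reference tile. The sketch below is a minimal, hedged illustration of that baseline only; the paper's actual method replaces it with a learned CNN combining global and local mappings, and the function and variable names here are hypothetical.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their distribution matches the reference's
    (a classical global color-mapping baseline, not the paper's CNN)."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical cumulative distribution functions of both tiles.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)

# Toy example: a dark tile adjusted toward a brighter reference tile.
rng = np.random.default_rng(0)
dark = rng.normal(60, 10, (64, 64)).clip(0, 255)    # simulated dark image
bright = rng.normal(140, 20, (64, 64)).clip(0, 255)  # simulated reference
adjusted = match_histogram(dark, bright)
print(adjusted.mean())  # mean shifts toward the reference's mean
```

Because this mapping is purely global and per-channel, it cannot model the one-to-many relationships mentioned above (e.g., the same gray value belonging to both water and shadow), which is the limitation the proposed learned method targets.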