Abstract
Due to hardware technology limitations, satellite sensors cannot capture images with high temporal, spatial, and spectral resolutions simultaneously. The Gaofen-1 satellite, however, carries 2-meter panchromatic, 8-meter multispectral, and 16-meter wide-field cameras, which makes it possible to integrate observations from these complementary sensors. To exploit this opportunity, we study a spatio-temporal-spectral fusion method for Gaofen-1 images that combines the strengths of all three sensors. Inspired by the diffusion model, which learns the data distribution of the target image, we propose a new network built on an enhanced diffusion framework; the network incorporates both structural and spectral constraints to guide the fusion process. This work represents the first application of the diffusion model to spatio-temporal-spectral fusion, specifically synthesizing 2-meter multispectral images with dense temporal coverage. To assess fusion quality, we have developed a benchmark dataset. During the validation stage, we evaluate the radiometric deviation, structural similarity, and spectral fidelity between the fused 2-meter multispectral images and the reference images. Both visual and quantitative assessments demonstrate that the newly proposed method works well for Gaofen-1 fusion.
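For orientation, the three validation criteria named above can be made concrete with standard image-quality measures. The sketch below is not the paper's own evaluation code; it is a minimal illustration that, as an assumption, pairs radiometric deviation with RMSE, structural similarity with a global (unwindowed) SSIM, and spectral fidelity with the spectral angle mapper, operating on NumPy arrays of shape (height, width, bands). The `data_range` value must be set to the actual bit depth of the imagery.

```python
import numpy as np

def rmse(fused, reference):
    """Root-mean-square error as a simple measure of radiometric deviation."""
    diff = fused.astype(np.float64) - reference.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def spectral_angle(fused, reference, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra of two
    (H, W, bands) images; lower values indicate better spectral fidelity."""
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    r = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    cos = np.sum(f * r, axis=1) / (
        np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + eps
    )
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def global_ssim(fused, reference, data_range, k1=0.01, k2=0.03):
    """Global SSIM over a single band: a coarse structural-similarity check
    (the windowed variant used in practice averages this over local patches)."""
    f = fused.astype(np.float64)
    r = reference.astype(np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_f, mu_r = f.mean(), r.mean()
    var_f, var_r = f.var(), r.var()
    cov = np.mean((f - mu_f) * (r - mu_r))
    return ((2 * mu_f * mu_r + c1) * (2 * cov + c2)) / (
        (mu_f ** 2 + mu_r ** 2 + c1) * (var_f + var_r + c2)
    )

# Hypothetical usage on synthetic data standing in for fused and reference images.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 1023, size=(64, 64, 4))   # 4-band multispectral stand-in
    fused = reference + rng.normal(0, 5, size=reference.shape)
    print("RMSE:", rmse(fused, reference))
    print("SAM (rad):", spectral_angle(fused, reference))
    print("SSIM (band 0):", global_ssim(fused[..., 0], reference[..., 0], data_range=1023.0))
```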