Abstract
Owing to the tradeoff between scanning swath and pixel size, currently no satellite Earth observation sensor can collect images with high spatial and temporal resolution simultaneously. This limits the application of satellite images in many fields, including the characterization of crop yields or the detailed investigation of human-nature interactions. Spatio-temporal fusion (STF) is a widely used approach to solve the aforementioned problem. Traditional STF methods reconstruct fine-resolution images under the assumption that changes can be transferred directly from one sensor to another. However, this assumption may not hold in real scenarios, owing to the different capacity of available sensors to characterize changes. In this paper, we model such differences as a bias, and introduce a new sensor bias-driven STF model (called BiaSTF) to mitigate the spectral and spatial distortions present in traditional methods. In addition, we propose a new learning method based on convolutional neural networks (CNNs) to efficiently obtain this bias. An experimental evaluation on two public datasets suggests that our newly developed method achieves excellent performance when compared to other available approaches.
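To make the bias idea concrete, below is a minimal PyTorch sketch of the general scheme the abstract describes: a small CNN predicts a per-pixel bias that corrects the naive assumption that the coarse-sensor change transfers directly to the fine sensor. The layer sizes, the band count, the `BiasNet` name, and the additive formulation `fine_t2 ≈ fine_t1 + coarse_change + bias` are illustrative assumptions, not the paper's exact BiaSTF architecture.

```python
import torch
import torch.nn as nn


class BiasNet(nn.Module):
    """Minimal CNN that estimates the sensor bias between the change observed
    at coarse resolution and the change expected at fine resolution.
    Hypothetical architecture for illustration, not the paper's exact model."""

    def __init__(self, bands: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * bands, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=3, padding=1),
        )

    def forward(self, coarse_change: torch.Tensor, fine_t1: torch.Tensor) -> torch.Tensor:
        # Predict a per-pixel, per-band bias from the coarse-resolution change
        # and the known fine image at the reference date.
        return self.net(torch.cat([coarse_change, fine_t1], dim=1))


# Usage: correct the naive "transfer the coarse change directly" prediction.
bands, h, w = 3, 128, 128
fine_t1 = torch.rand(1, bands, h, w)        # fine image at reference date t1
coarse_change = torch.rand(1, bands, h, w)  # coarse change (t2 - t1), upsampled to fine grid
model = BiasNet(bands)
bias = model(coarse_change, fine_t1)
fine_t2_pred = fine_t1 + coarse_change + bias  # bias-corrected fusion estimate
```

In this reading, the CNN only has to learn the residual between what the two sensors see, which is a smaller and presumably easier target than reconstructing the whole fine-resolution image from scratch.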