Abstract
Multi-sensor data analysis allows exploiting the heterogeneous data regularly acquired by the many available Remote Sensing (RS) systems. Machine- and deep-learning methods use the information of heterogeneous sources to improve the results obtained using single-source data. However, State-of-the-Art (SoA) methods analyze either the multi-scale information of multi-sensor multi-resolution images or the temporal component of image time series, but not both jointly. We propose a supervised Deep-Learning (DL) classification method that jointly performs a multi-scale and multi-temporal analysis of RS multi-temporal images acquired by different sensors. The proposed method processes Very-High-Resolution (VHR) images with a Residual Network (ResNet) having a wide receptive field that captures geometrical details, and multi-temporal High-Resolution (HR) images with a 3D Convolutional Neural Network (3D-CNN) that analyzes both the spatial and the temporal information. The multi-scale and multi-temporal features are processed together in a decoder to retrieve a land-cover map. We tested the proposed method on two multi-sensor and multi-temporal datasets: one composed of VHR orthophotos and Sentinel-2 multi-temporal images for pasture classification, and one composed of VHR orthophotos and Sentinel-1 multi-temporal images. The results prove the effectiveness of the proposed classification method.
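The abstract does not give implementation details, but a minimal sketch of the two-branch design it describes could look like the following (PyTorch). All layer widths, kernel sizes, the ResNet-18 backbone choice, and the concatenation-based fusion in the decoder are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class TwoBranchClassifier(nn.Module):
    """Illustrative two-branch network: 2D ResNet for VHR imagery,
    3D-CNN for the HR image time series, fused in a small decoder."""

    def __init__(self, num_classes: int, hr_bands: int = 4):
        super().__init__()
        # VHR branch: ResNet backbone (wide receptive field) for geometrical detail.
        resnet = models.resnet18(weights=None)
        self.vhr_encoder = nn.Sequential(*list(resnet.children())[:-2])  # B x 512 x h x w

        # HR multi-temporal branch: 3D convolutions over (bands, time, height, width).
        self.hr_encoder = nn.Sequential(
            nn.Conv3d(hr_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((1, None, None)),  # collapse the temporal axis
        )

        # Decoder: fuse multi-scale and multi-temporal features into class scores.
        self.decoder = nn.Sequential(
            nn.Conv2d(512 + 64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, vhr: torch.Tensor, hr_series: torch.Tensor) -> torch.Tensor:
        # vhr:       B x 3 x H x W          (single VHR orthophoto)
        # hr_series: B x C x T x H' x W'    (HR multi-temporal stack)
        f_vhr = self.vhr_encoder(vhr)
        f_hr = self.hr_encoder(hr_series).squeeze(2)  # drop the pooled temporal dim
        # Resample HR features to the VHR feature grid before fusion.
        f_hr = nn.functional.interpolate(
            f_hr, size=f_vhr.shape[-2:], mode="bilinear", align_corners=False
        )
        fused = torch.cat([f_vhr, f_hr], dim=1)
        logits = self.decoder(fused)
        # Upsample to VHR resolution for a per-pixel land-cover map.
        return nn.functional.interpolate(
            logits, size=vhr.shape[-2:], mode="bilinear", align_corners=False
        )
```

The sketch only shows the data flow implied by the abstract: one encoder per sensor, temporal aggregation in the 3D branch, and joint decoding of the concatenated features into a land-cover map.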