Abstract

Understanding the spatial distribution of irrigated croplands is crucial for food security and water use. Mapping land cover classes in high-spatial-resolution images requires analyzing the semantic information of target objects in addition to the spectral or spatial–spectral information of local pixels. Deep convolutional neural networks (DCNNs) can adaptively characterize the semantic features of objects. This study uses DCNNs to extract irrigated croplands from Sentinel-2 images in the states of Washington and California in the United States. Building on fully convolutional network (FCN) architectures, we integrated 101-layer DCNNs, discarded pooling layers, and employed dilated convolution to preserve location information. The findings indicate that irrigated croplands can be effectively detected at various phases of crop growth in the fields. A quantitative analysis of the trained models showed that, across the two states, the lowest Intersection over Union (IoU) and Kappa values of the three models were 0.88 and 0.91, respectively. The temporal portability of the deep models across years was acceptable: the lowest recall and overall accuracy (OA) values from 2018 to 2021 were 0.91 and 0.87, respectively. In Washington, the lowest OA value from 10 m to 300 m resolution was 0.76. This study demonstrates the potential of FCN + DCNN approaches for mapping irrigated croplands across large regions, providing a solution for irrigation mapping. The spatial-resolution portability of the deep models could be improved further by refining model architectures.
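The abstract's key architectural choice (discarding pooling layers in favor of dilated convolution) can be illustrated with the standard convolution output-size formula. The sketch below is illustrative only and uses assumed layer parameters (3×3 kernels, a 32×32 feature map), not the authors' exact configuration; it shows why a dilated convolution enlarges the receptive field while keeping the feature map at full resolution, so per-pixel location information survives to the output.

```python
def conv_out(size, kernel, stride=1, padding=1, dilation=1):
    """Spatial output size of a convolution (common deep-learning convention)."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A typical downsampling stage: 3x3 conv (padding=1) followed by 2x2 pooling
# with stride 2 halves a 32x32 feature map, losing spatial precision.
after_conv = conv_out(32, kernel=3, padding=1)           # 32
after_pool = conv_out(after_conv, kernel=2, stride=2, padding=0)  # 16

# Replacing that stage with a dilated 3x3 conv (dilation=2, padding=2)
# keeps the same receptive-field growth but preserves the 32x32 map.
dilated = conv_out(32, kernel=3, padding=2, dilation=2)  # 32

print(after_pool, dilated)  # 16 32
```

This resolution-preserving trick is the same one used in DeepLab-style semantic segmentation backbones, which is consistent with the FCN + dilation design the abstract describes.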
