Abstract

UAV hyperspectral imagery (HSI) offers the unique merit of both very high spatial and very high spectral resolution, providing a high-quality data source for automatic crop mapping. Deep learning has recently been widely used in crop classification; however, designing an accurate crop mapping model for HSI data remains a challenging task. This paper therefore proposes a novel semantic segmentation model, HSI-TransUNet, for crop mapping, which makes full use of the abundant spatial and spectral information of UAV HSI data simultaneously. Specifically, HSI-TransUNet is an improved version of TransUNet with four important modifications tailored to HSI data. First, a spectral-feature attention module is designed for spectral feature aggregation in the encoder. Second, a series of Transformer layers with residual connections is designed to learn global contextual features. In the decoder, sub-pixel convolutions are adopted to avoid the checkerboard effect in the segmentation results. Finally, a hybrid loss function is designed to further refine the predictions at boundaries. Experimental results indicate that the proposed HSI-TransUNet achieves good performance in crop identification, with an overall accuracy of 86.05%. Ablation studies verify the effectiveness of each refined module in HSI-TransUNet, and comparison experiments show that HSI-TransUNet outperforms several previous semantic segmentation models. The dataset used in this paper, UAV-HSI-Crop, is publicly available at http://doi.org/10.57760/sciencedb.01898.
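The sketch below illustrates, in PyTorch, two of the components named in the abstract: a sub-pixel convolution (PixelShuffle) upsampling block of the kind used in the decoder to avoid checkerboard artifacts, and a hybrid segmentation loss. The channel counts, the number of crop classes, and the cross-entropy-plus-Dice composition of the loss are assumptions for illustration only; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubPixelUpsample(nn.Module):
    """Sub-pixel convolution upsampling: a 3x3 convolution expands channels by
    scale**2, and PixelShuffle rearranges them into higher spatial resolution,
    avoiding the checkerboard artifacts of transposed convolutions."""

    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels * scale ** 2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))


def hybrid_loss(logits: torch.Tensor, target: torch.Tensor,
                dice_weight: float = 0.5) -> torch.Tensor:
    """Illustrative hybrid loss: cross-entropy plus a soft Dice term
    (an assumed composition; the paper does not specify it here)."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = 1.0 - ((2.0 * inter + 1e-6) / (union + 1e-6)).mean()
    return ce + dice_weight * dice


if __name__ == "__main__":
    # Hypothetical shapes: 256-channel decoder features, 6 crop classes.
    up = SubPixelUpsample(256, 128, scale=2)
    feat = torch.randn(1, 256, 32, 32)
    print(up(feat).shape)  # -> torch.Size([1, 128, 64, 64])

    logits = torch.randn(2, 6, 64, 64)
    labels = torch.randint(0, 6, (2, 64, 64))
    print(hybrid_loss(logits, labels))
```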
