Abstract
Efficient and accurate acquisition of crop spatial planting structure information is of great significance for ensuring national food security. Existing crop classification studies have yet to achieve optimal results in accurate crop identification, owing to the complex heterogeneous details contained in the images and insufficient analysis and utilization of feature information. This study takes the Yuncheng-Linfen Basin in Shanxi, China, as the study area and applies the ReliefF-RFE feature selection algorithm for data dimensionality reduction. Contextual semantic feature aggregation, a spatial-channel attention mechanism, and a hierarchical refinement strategy are then used to fuse the multi-scale feature information extracted from the remote sensing images in both top-down and bottom-up directions. This study shows that 1) the ReliefF-RFE algorithm can effectively reduce the dimensionality of the 224-dimensional multisource features and generate a 31-dimensional optimal feature subset with strong interpretability, improving the computational efficiency of the model; 2) the multi-scale feature fusion model constructed in this study outperforms UNet, ResNet, DeepLabv3+, HRNet, and Swin Transformer in classifying winter wheat and corn, with overall accuracies of 95.76% and 93.44%, respectively; 3) ablation experiments on the multi-feature fusion strategy confirm the rationality of each module and validate the irreplaceability and superiority of the model for improving crop classification accuracy over complex and heterogeneous terrain; 4) through cross-scale contextual aggregation of deep and shallow features, the multi-scale feature fusion model converts deep abstract features into local detail information, achieves good extraction results on fragmented small plots, and restores local fine details significantly better than the other models. The multi-scale feature fusion classification model constructed in this study fully considers the interpretability of the input features and fuses semantic features at different scales to eliminate the “semantic gap”; it exhibits strong stability and robustness and provides a reference for fast and accurate crop mapping from Sentinel-2 imagery over large areas.
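To illustrate the two-stage ReliefF-RFE selection step described above, the sketch below first ranks features with ReliefF and then prunes the candidates with recursive feature elimination down to a 31-dimensional subset. This is a minimal sketch, not the authors' implementation: it assumes the skrebate and scikit-learn libraries, a random-forest base estimator, and randomly generated placeholder data standing in for the 224-dimensional multisource feature stack.

```python
# Illustrative ReliefF-RFE sketch (not the authors' code).
# Assumptions: skrebate for ReliefF, scikit-learn for RFE, and random
# placeholder data in place of the real 224-dimensional feature stack.
import numpy as np
from skrebate import ReliefF
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.random((500, 224))        # placeholder samples x 224 multisource features
y = rng.integers(0, 2, 500)       # placeholder crop-class labels

# Stage 1: ReliefF weights every feature; keep the top half as candidates.
relief = ReliefF(n_neighbors=20)
relief.fit(X, y)
candidate_idx = np.argsort(relief.feature_importances_)[::-1][:112]

# Stage 2: RFE with a random-forest estimator prunes the candidates
# down to a 31-dimensional optimal subset (the size reported in the abstract).
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=31, step=5)
rfe.fit(X[:, candidate_idx], y)
selected_idx = candidate_idx[rfe.support_]
print("Selected feature indices:", sorted(selected_idx.tolist()))
```

In practice the placeholder arrays would be replaced by the per-pixel (or per-sample) Sentinel-2 feature stack and its reference labels; the n_neighbors and step values here are arbitrary illustrative choices.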