Abstract
High-precision, high-efficiency semantic segmentation of high-resolution remote sensing images remains a challenge: existing models typically require large amounts of training data to achieve good classification results and have numerous training parameters. This paper proposes a novel model, MST-DeepLabv3+, for remote sensing image classification. It is based on DeepLabv3+ and produces better results with fewer training parameters. MST-DeepLabv3+ makes three improvements: (1) replacing the Xception backbone of DeepLabv3+ with MobileNetV2 to reduce the number of model parameters; (2) adding the SENet attention mechanism module to increase semantic segmentation precision; (3) introducing transfer learning to enhance the model's capacity to recognize features and raise segmentation accuracy. MST-DeepLabv3+ was tested on the International Society for Photogrammetry and Remote Sensing (ISPRS) dataset and the Gaofen Image Dataset (GID), and was practically applied to the Taikang cultivated land dataset. On the ISPRS dataset, the mean intersection over union (MIoU), overall accuracy (OA), precision, recall, and F1-score are 82.47%, 92.13%, 90.34%, 90.12%, and 90.23%, respectively; on the GID dataset, these values are 73.44%, 85.58%, 84.10%, 84.86%, and 84.48%; on the Taikang cultivated land dataset, they reach 90.77%, 95.47%, 95.28%, 95.02%, and 95.15%. The experimental results indicate that MST-DeepLabv3+ effectively improves the accuracy of semantic segmentation of remote sensing images, recognizes edge information more completely, and significantly reduces the parameter count.
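As background for improvement (2), the following is a minimal NumPy sketch of a squeeze-and-excitation (SE) channel-attention block of the kind SENet introduces. The weight shapes, reduction ratio, and toy input sizes are illustrative assumptions, not values from the paper; the paper's actual module sits inside a trained DeepLabv3+ network.

```python
import numpy as np

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation: recalibrate channel responses of a feature map.

    feature_map: (C, H, W) array.
    w1: (C//r, C), w2: (C, C//r) bottleneck weights (r = reduction ratio).
    Shapes here are illustrative assumptions, not the paper's configuration.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives one weight per channel
    s = np.maximum(w1 @ z + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each input channel by its learned attention weight
    return feature_map * s[:, None, None]

# Toy example: 8 channels, reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
b1 = np.zeros(2)
w2 = rng.standard_normal((8, 2)) * 0.1
b2 = np.zeros(8)
y = se_block(x, w1, b1, w2, b2)
print(y.shape)  # (8, 16, 16)
```

The output keeps the input's shape; each channel is multiplied by a single sigmoid-gated scalar, which is what lets the network emphasize informative channels and suppress less useful ones.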