Abstract

Semantic segmentation is a crucial method for recognizing and classifying objects in high-resolution remote sensing images (HRRSIs). However, traditional semantic segmentation models perform poorly on HRRSIs because target scales vary widely and the edges of small-scale targets are difficult to delineate. To address this issue, we propose a multi-scale feature enhancement network (MFENet) to improve the segmentation accuracy of small-scale objects in HRRSIs. MFENet accounts for the differences between objects of different scales and selects more suitable receptive fields to enhance the extraction of multi-scale semantic features. We propose a composite atrous multi-scale feature fusion (CAMFF) module to enhance the extraction of spatial detail and semantic information from features at different scales. In addition, we propose an improved composite atrous spatial pyramid pooling (C-ASPP) module to strengthen the network's feature extraction capability across multiple scales. We also propose a network structure that combines the C-ASPP module with the efficient channel attention (ECA) module in parallel, which better extracts contextual information. Experimental evaluations on the Potsdam and Vaihingen datasets demonstrate the effectiveness of our network, with F1 scores reaching 93.33% and 94.66%, respectively.
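
To make the parallel C-ASPP/ECA idea concrete, the following is a minimal, hypothetical PyTorch sketch of a standard ASPP branch run in parallel with an ECA branch. The abstract does not specify the internals of the C-ASPP or CAMFF modules, so the dilation rates, channel widths, class names, and the summation-based fusion used here are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch only: plain ASPP stands in for C-ASPP, and the
# parallel fusion (sum) is an assumed reading of the abstract.
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient channel attention: 1D conv over globally pooled channels."""
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        w = self.pool(x)                                    # (B, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(1, 2))        # (B, 1, C)
        w = self.sigmoid(w.transpose(1, 2).unsqueeze(-1))   # (B, C, 1, 1)
        return x * w                                        # channel-wise reweighting


class ASPP(nn.Module):
    """Plain atrous spatial pyramid pooling, used here as a placeholder for C-ASPP."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False)

    def forward(self, x):
        # Parallel atrous convolutions capture context at several receptive fields.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class ParallelASPPECA(nn.Module):
    """Runs the ASPP branch and the ECA branch in parallel and sums the results."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.aspp = ASPP(in_ch, out_ch)
        self.eca = ECA(in_ch)
        self.align = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # match channels for the sum

    def forward(self, x):
        return self.aspp(x) + self.align(self.eca(x))


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)          # assumed backbone feature map shape
    out = ParallelASPPECA(256, 256)(feats)
    print(out.shape)                             # torch.Size([2, 256, 32, 32])
```

In this reading, the ASPP branch gathers multi-scale context while the ECA branch reweights channels of the same input, and the two are fused additively; the paper's actual composite atrous design may differ.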
