Abstract
Semantic segmentation of remote sensing images is a critical and challenging task: extracting useful information easily and reliably from vast remote sensing imagery remains a significant issue. Many methods based on convolutional neural networks have been explored to obtain more accurate segmentation of remote sensing images. However, because of properties unique to remote sensing images, such as dramatic changes in the scale of target objects, the results remain unsatisfactory. To address this problem, a dedicated network is designed: (1) a new backbone network that, compared with ResNet50, extracts features at varying scales more effectively; (2) a hybrid location module that reduces spatial information loss by compensating for the positional detail lost in down-sampling operations; and (3) a novel auxiliary loss function that improves the model's discriminative ability by constraining inter-class and intra-class distances (a rough sketch of such a loss appears below). The proposed algorithm is evaluated on remote sensing datasets (NWPU-45, DLRSD, and WHDLD). The experimental results show that the method achieves state-of-the-art performance.
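The abstract does not give the exact formulation of the auxiliary loss. As a minimal illustration only, and not the authors' method, a distance-based auxiliary loss that pulls same-class pixel features toward their class center while pushing different class centers apart could look like the following PyTorch sketch; the function name `discriminative_aux_loss`, the `margin` parameter, and the per-class center computation are all assumptions.

```python
import torch
import torch.nn.functional as F

def discriminative_aux_loss(features, labels, margin=1.0):
    """Hypothetical intra-/inter-class auxiliary loss (a sketch, not the paper's loss).

    features: (N, C, H, W) feature map from the network
    labels:   (N, H, W) integer class map, assumed already resized to (H, W)
    """
    n, c, h, w = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, c)   # (N*H*W, C)
    labs = labels.reshape(-1)                              # (N*H*W,)

    classes = labs.unique()
    centers = []
    intra = features.new_zeros(())
    for k in classes:
        class_feats = feats[labs == k]
        center = class_feats.mean(dim=0)
        centers.append(center)
        # Intra-class term: mean distance of pixels to their class center.
        intra = intra + (class_feats - center).norm(dim=1).mean()
    intra = intra / len(classes)

    # Inter-class term: hinge penalty when two class centers are closer than `margin`.
    inter = features.new_zeros(())
    if len(centers) > 1:
        centers = torch.stack(centers)                     # (K, C)
        dists = torch.cdist(centers, centers)              # (K, K)
        k = centers.shape[0]
        off_diag = dists[~torch.eye(k, dtype=torch.bool, device=dists.device)]
        inter = F.relu(margin - off_diag).mean()

    return intra + inter
```

In practice a loss of this kind would be added, with a small weight, to the main cross-entropy segmentation loss so that the learned features become more separable between classes and more compact within each class.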