Abstract
With the development of convolutional neural networks, semantic segmentation of remote sensing images has advanced rapidly, but unsolved problems remain in this field due to the lack of multiscale information and the feature misalignment introduced during upsampling. To address these problems, we propose the multiscale feature fusion and alignment network (MFANet). MFANet is composed of an encoder and a decoder. The encoder contains a fully convolutional network, a multilevel feature fusion block (MLFFB), and a multiscale feature pyramid (MSFP). These subnetworks produce fine-grained feature maps rich in multiscale and global information and improve segmentation results across object scales. In addition, MFANet uses a lightweight convolutional subnetwork, the decoder, to upsample the segmentation map stage by stage. By combining features at three scales, the decoder promotes feature alignment during upsampling. Along with the decoder, MFANet employs a multistage supervision loss to enhance localization performance and boundary regression. Benefiting from the encoder-decoder structure and the novel components inside the encoder, MFANet is highly effective for the semantic segmentation of remote sensing images and adapts well to complex scenes. We evaluate MFANet on the Vaihingen and Potsdam data sets, where it outperforms state-of-the-art methods both quantitatively and visually.
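The sketch below illustrates the general pattern the abstract describes: an encoder producing features at several scales, a fusion step combining them, a lightweight decoder that upsamples stage by stage, and a multistage supervision loss summed over the per-stage predictions. It is a minimal toy illustration, not the paper's actual MLFFB/MSFP design; all module names (`MultiscaleFusion`, `ToySegNet`, `multistage_loss`), channel widths, and layer counts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleFusion(nn.Module):
    """Toy stand-in for multiscale fusion: project each scale to a common
    width, resize to the coarsest resolution, and sum."""
    def __init__(self, channels, width=64):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, width, 1) for c in channels)

    def forward(self, feats):
        target = feats[-1].shape[-2:]  # coarsest feature map's spatial size
        fused = 0
        for f, p in zip(feats, self.proj):
            fused = fused + F.interpolate(p(f), size=target,
                                          mode="bilinear", align_corners=False)
        return fused

class ToySegNet(nn.Module):
    """Minimal encoder-decoder with stage-by-stage upsampling and one
    auxiliary prediction head per decoder stage (multistage supervision)."""
    def __init__(self, num_classes=6):
        super().__init__()
        # Three encoder stages at 1/2, 1/4, and 1/8 of the input resolution.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.fuse = MultiscaleFusion([32, 64, 128])
        # Lightweight decoder: three upsampling stages, each with a head.
        self.dec = nn.ModuleList(
            nn.Sequential(nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU()) for _ in range(3))
        self.heads = nn.ModuleList(nn.Conv2d(64, num_classes, 1) for _ in range(3))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        y = self.fuse([f1, f2, f3])  # fused features at 1/8 resolution
        outputs = []
        for dec, head in zip(self.dec, self.heads):
            y = F.interpolate(y, scale_factor=2,
                              mode="bilinear", align_corners=False)
            y = dec(y)
            outputs.append(head(y))  # per-stage prediction for supervision
        return outputs  # coarse-to-fine maps at 1/4, 1/2, and full resolution

def multistage_loss(outputs, target):
    """Sum of cross-entropy terms, each stage's map resized to label size."""
    loss = 0.0
    for out in outputs:
        out = F.interpolate(out, size=target.shape[-2:],
                            mode="bilinear", align_corners=False)
        loss = loss + F.cross_entropy(out, target)
    return loss

# Usage: a 6-class label map for one 256x256 tile (hypothetical data).
net = ToySegNet()
x = torch.randn(1, 3, 256, 256)
labels = torch.randint(0, 6, (1, 256, 256))
print(multistage_loss(net(x), labels))
```

Supervising every decoder stage, rather than only the final output, is one common way to realize the multistage supervision loss the abstract mentions: coarse stages learn localization while the final stage refines boundaries.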