Abstract

In recent years, AI and Deep Learning (DL) methods have been widely used for object classification, recognition, and segmentation of high-resolution multispectral remote sensing images. These DL-based solutions outperform traditional spectral algorithms but still suffer from insufficient optimization of the global and local features of object context. In addition, failures of code-data isolation and/or the disclosure of detailed eigenvalues can cause serious privacy breaches and even secret leakage, given the sensitivity of high-resolution remote sensing data and their processing mechanisms. In this paper, Class Feature (CF) modules are introduced in the decoder of an attention-based CNN to distinguish building from non-building (background) areas. In this way, the context features of a focused object can be extracted in greater detail while the resolution of the images is maintained. The reconstructed local and global feature values and their dependencies are preserved by reconfiguring multiple effective attention modules with contextual dependencies, yielding better eigenvalue results. Quantitative results and their visualizations show that the proposed model outperforms prior work on two large-scale building remote sensing datasets. Its F1-score reaches 87.91 and 89.58 on the WHU Buildings Dataset and the Massachusetts Buildings Dataset, respectively, exceeding the other semantic segmentation models.
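To illustrate the kind of attention-based channel reweighting the abstract describes, the following is a minimal NumPy sketch of a generic squeeze-and-excitation style channel attention block. It is not the authors' actual CF module (whose details are not given here); the weight matrices are random placeholders standing in for learned parameters, and the `reduction` bottleneck ratio is an assumed hyperparameter.

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """Generic squeeze-and-excitation channel attention on a (C, H, W) feature map.

    Global average pooling ("squeeze") -> small two-layer bottleneck ->
    sigmoid gate that reweights each channel. Spatial resolution is preserved,
    matching the abstract's claim that image resolution is maintained.
    Weights here are random placeholders, not learned parameters.
    """
    c, h, w = feat.shape
    squeezed = feat.mean(axis=(1, 2))              # (C,) global channel context
    rng = np.random.default_rng(0)                 # placeholder for learned weights
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]              # reweight channels

feat = np.ones((8, 4, 4))                          # toy decoder feature map
out = channel_attention(feat)
print(out.shape)                                   # spatial size unchanged: (8, 4, 4)
```

In a real decoder, blocks like this sit between upsampling stages so that globally pooled context can emphasize channels relevant to the target class (here, buildings) without reducing spatial resolution.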
