Abstract

Detecting land cover change is an essential task in very-high-spatial-resolution (VHR) remote sensing applications. However, because VHR images capture fine details of ground objects, their scenes are usually complex. For example, the same object may show distinct appearances or features across VHR images, caused by noise, climate conditions, imaging angles, and other factors. To address this issue, this paper proposes a novel unsupervised approach named bipartite graph attention autoencoders (BGAAE) for VHR image change detection. BGAAE refines the dual-convolutional-autoencoder design built on an image-translation architecture by equipping the encoder layers with a graph attention mechanism (GAM). To generate an effective difference image, the method introduces two loss terms in addition to the reconstruction loss: a domain correlation loss and a semantic consistency loss. The domain correlation loss, defined on the encoder layers, enforces spatial alignment of the deep feature representations of unchanged objects and mitigates the influence of changed pixels on the learning objective. The semantic consistency loss ensures that the semantic features of the bitemporal images remain consistent after transcoding, allowing for more flexible transformations. Experimental results on four VHR image datasets demonstrate the superiority of the proposed method.
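To make the training objective concrete, below is a minimal NumPy sketch of how the three loss terms described above might be combined. All function names, the exact loss forms, and the weighting scheme are illustrative assumptions for exposition; they are not the paper's implementation, which operates on learned convolutional and graph-attention features.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def bgaae_loss(feat_t1, feat_t2, rec_t1, img_t1, rec_t2, img_t2,
               unchanged_mask, w_corr=1.0, w_sem=1.0):
    """Hypothetical composite objective: reconstruction loss plus
    domain correlation loss (on unchanged pixels only) plus a
    semantic consistency loss (all forms are assumptions)."""
    # Reconstruction loss: each autoencoder reproduces its own input image.
    l_rec = mse(rec_t1, img_t1) + mse(rec_t2, img_t2)

    # Domain correlation loss: align deep features of the two dates only
    # where the scene is unchanged, so changed pixels do not dominate
    # the learning objective.
    m = unchanged_mask[..., None]          # broadcast mask over channels
    n = max(float(unchanged_mask.sum()), 1.0)
    l_corr = float(np.sum(m * (feat_t1 - feat_t2) ** 2) / n)

    # Semantic consistency loss: features of the bitemporal images should
    # agree after transcoding; approximated here as plain feature-space
    # agreement (a simplification of the paper's formulation).
    l_sem = mse(feat_t1, feat_t2)

    return l_rec + w_corr * l_corr + w_sem * l_sem
```

At inference, the difference image would then be derived from the translated images or aligned features, e.g. a per-pixel distance map thresholded to yield the change mask.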
