Abstract

Convolutional neural networks (CNNs) have attracted much attention in change detection (CD) for their superior feature learning ability. However, most existing CNN-based CD methods adopt an early- or late-fusion strategy, fusing only low-level spatial details or high-level semantic information. So far, the impact of a multilevel fusion strategy across multitemporal hyperspectral (HS) images, and its application to CD, remains unexplored. In this article, we propose a multilevel encoder–decoder attention network (ML-EDAN) that allows the network to make full use of hierarchical features for CD in HS images. A two-stream encoder–decoder framework serves as the backbone to exploit and fuse the hierarchical features from all convolutional layers of the multitemporal HS images. Within the encoder–decoder, a contextual-information-guided attention module is developed to enable more effective spatial–spectral feature transfer through the network. After the multilevel hierarchical features are obtained, a long short-term memory (LSTM) subnetwork analyzes the temporal dependence between the multitemporal images. Moreover, the proposed ML-EDAN is trained end to end with a new joint loss function that considers both reconstruction error and pixelwise classification error. Experiments on three datasets demonstrate the effectiveness of the proposed ML-EDAN for HS CD in comparison with widely accepted state-of-the-art methods.
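The joint loss described above combines a reconstruction term and a pixelwise classification term. The abstract does not give the exact formulation, so the following PyTorch sketch is only an illustrative assumption: mean-squared error for reconstruction, cross-entropy for pixelwise change classification, and a weighting factor `lam` that is a hypothetical hyperparameter, not taken from the paper.

```python
import torch
import torch.nn as nn


class JointLoss(nn.Module):
    """Illustrative joint loss: reconstruction error plus weighted
    pixelwise classification error. The choice of MSE, cross-entropy,
    and the weight `lam` are assumptions for the sketch, not the
    paper's actual formulation."""

    def __init__(self, lam: float = 0.5):
        super().__init__()
        self.lam = lam
        self.recon_loss = nn.MSELoss()        # reconstruction term
        self.cls_loss = nn.CrossEntropyLoss()  # pixelwise classification term

    def forward(self, recon, recon_target, logits, labels):
        # recon, recon_target: (N, B, H, W) hyperspectral patches (B bands)
        # logits: (N, C, H, W) per-pixel class scores; labels: (N, H, W)
        return self.recon_loss(recon, recon_target) + self.lam * self.cls_loss(logits, labels)


# Quick check on random tensors shaped like small HS patches.
loss_fn = JointLoss(lam=0.5)
recon = torch.randn(2, 10, 8, 8)            # reconstructed 10-band patch
target = recon + 0.1 * torch.randn_like(recon)
logits = torch.randn(2, 2, 8, 8)            # change / no-change logits
labels = torch.randint(0, 2, (2, 8, 8))     # per-pixel ground truth
loss = loss_fn(recon, target, logits, labels)
```

Because both terms are differentiable, the whole network can be trained end to end with a single backward pass through this combined objective, as the abstract describes.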
