Abstract

Change detection in synthetic aperture radar (SAR) images is an important part of remote sensing (RS) image analysis. Contemporary researchers have concentrated on spatial and deep-layer semantic information while giving little attention to the extraction of multidimensional and shallow-layer feature representations. Furthermore, change detection typically relies on patch-wise training and pixel-to-pixel prediction, so its accuracy is sensitive to the edge noise these introduce and to the availability of original position information. To address these challenges, we propose a new neural network structure that enables spatial-frequency-temporal feature extraction through end-to-end training for change detection between SAR images acquired at two different times. Our method feeds image patches into three parallel network structures: a densely connected convolutional neural network (CNN), a frequency-domain processing network based on a discrete cosine transform (DCT), and a recurrent neural network (RNN). The resulting multidimensional feature representations alleviate speckle noise and provide a comprehensive account of semantic information. We also propose an ensemble multi-region-channel module (MRCM) to emphasize the central region of each feature map and to employ the most critical information in each channel for binary classification. We validate our proposed method on four benchmark SAR datasets. Experimental results demonstrate its competitive performance.
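The abstract only names the three branches; as a rough illustration of how such a spatial-frequency-temporal extractor could be wired together, the PyTorch sketch below builds one branch per modality and fuses them for a binary changed/unchanged decision per patch pair. Everything here is an assumption rather than the authors' implementation: the name ThreeBranchNet, the 7x7 patch size, the channel widths, the fixed DCT-matrix trick, and the GRU standing in for the RNN are all illustrative choices, and the MRCM fusion module is omitted.

import math
import torch
import torch.nn as nn

def dct_matrix(n):
    # Orthonormal DCT-II basis; D @ x @ D.T applies a 2-D DCT to a patch.
    k = torch.arange(n, dtype=torch.float32)
    d = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    d[0] /= math.sqrt(2.0)
    return d * math.sqrt(2.0 / n)

class ThreeBranchNet(nn.Module):
    # Hypothetical three-branch extractor: spatial (densely connected CNN),
    # frequency (fixed DCT + conv), and temporal (RNN over the two dates).
    def __init__(self, patch=7, hidden=32):
        super().__init__()
        # Spatial branch: two conv layers with a dense (concatenating) skip.
        self.conv1 = nn.Conv2d(2, hidden, 3, padding=1)
        self.conv2 = nn.Conv2d(2 + hidden, hidden, 3, padding=1)
        # Frequency branch: fixed 2-D DCT of each patch, then a conv.
        self.register_buffer("dct", dct_matrix(patch))
        self.freq_conv = nn.Conv2d(2, hidden, 3, padding=1)
        # Temporal branch: treat the two acquisition dates as a sequence.
        self.rnn = nn.GRU(patch * patch, hidden, batch_first=True)
        self.head = nn.Linear(3 * hidden, 2)  # changed / unchanged logits

    def forward(self, x):                     # x: (B, 2, P, P) bitemporal patches
        b, t, p, _ = x.shape
        s = torch.relu(self.conv1(x))
        s = torch.relu(self.conv2(torch.cat([x, s], 1))).mean(dim=(2, 3))
        f = self.dct @ x @ self.dct.T         # per-date 2-D DCT
        f = torch.relu(self.freq_conv(f)).mean(dim=(2, 3))
        _, h = self.rnn(x.reshape(b, t, p * p))
        return self.head(torch.cat([s, f, h[-1]], 1))

logits = ThreeBranchNet()(torch.randn(4, 2, 7, 7))  # 4 bitemporal patch pairs

Pooling each branch to a single feature vector and concatenating is the simplest fusion; the paper's MRCM would replace that pooling with its center-region and per-channel weighting before classification.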
