Abstract

In this article, we propose an effective siamese feature pyramid network (FPN), ForkNet, for remote sensing change detection (RSCD). We observe that the siamese structure widely used for RSCD relies on a single downsampling backbone (e.g., VGG16 or ResNet-18) in the feature extraction stage to extract deep features from each image, which leaves a large semantic gap between the high-level and low-level feature maps. The low-level feature maps, with their weak semantics, can become a bottleneck for network performance. We therefore apply an FPN in the feature extraction stage to generate feature representations with strong semantics at every level. Further, we design a cross-resolution attention module (CRAM) that aggregates contextual information across resolutions and naturally serves as a bridge for exchanging information between feature maps of different resolutions. The siamese FPN equipped with the CRAM is called ForkNet. To train ForkNet more effectively, we extend the Tversky loss to a novel pyramid Tversky loss, which supervises subregions at different scales and yields finer-grained detection results. Using the pyramid Tversky loss together with the focal loss, ForkNet achieves state-of-the-art detection performance on two challenging datasets.
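
To make the loss design concrete, the following is a minimal PyTorch sketch of a multi-scale Tversky loss. The standard Tversky loss, 1 - TP / (TP + alpha*FN + beta*FP), is a known formulation; the pyramid variant shown here, which averages that loss over progressively downsampled copies of the prediction and ground truth, is only an assumed illustration of "supervising subregions at different scales" — the exact scale weighting, number of levels, and parameter values (num_scales, alpha, beta) used by ForkNet are not given in the abstract.

    import torch
    import torch.nn.functional as F

    def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
        """Standard Tversky loss: 1 - TP / (TP + alpha*FN + beta*FP).

        pred and target are assumed to be probability maps of shape (B, 1, H, W).
        """
        pred = pred.flatten(1)
        target = target.flatten(1)
        tp = (pred * target).sum(dim=1)
        fn = ((1 - pred) * target).sum(dim=1)
        fp = (pred * (1 - target)).sum(dim=1)
        return (1 - tp / (tp + alpha * fn + beta * fp + eps)).mean()

    def pyramid_tversky_loss(pred, target, num_scales=4, alpha=0.7, beta=0.3):
        """Hypothetical pyramid extension: apply the Tversky loss to the change
        map at several downsampled resolutions so that both coarse and fine
        subregions contribute supervision, then average over the scales."""
        loss = 0.0
        for s in range(num_scales):
            if s > 0:
                pred = F.avg_pool2d(pred, kernel_size=2)
                target = F.avg_pool2d(target, kernel_size=2)
            loss = loss + tversky_loss(pred, target, alpha, beta)
        return loss / num_scales

In practice this term would be combined with a focal loss on the full-resolution prediction, as the abstract describes, with the relative weighting left as a training hyperparameter.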
