Fast assessment of damaged buildings is critical for post-disaster rescue operations. Building damage detection that leverages image processing and machine learning has become a popular research focus in recent years. Although supervised learning approaches have considerably improved damaged-building assessment, rapidly deploying supervised classification remains difficult because obtaining a large number of labeled samples in the aftermath of a disaster is hard. To address this issue, we propose an unsupervised self-attention domain adaptation (USADA) model, which transforms instances of the source domain into those of the target domain in pixel space. The proposed USADA consists of three parts: a set of generative adversarial networks (GANs), a classifier, and a self-attention module. The GANs adapt source-domain images so that they resemble target-domain images. Once adapted, these images, together with the original source-domain images, are used to train the classifier to recognize damaged buildings. The self-attention module preserves the foreground of the generated images, conditioned on the source-domain images, so that plausible samples are produced. As a case study, aerial images of Hurricanes Sandy, Maria, and Irma serve as the source and target domain datasets in our experiments. Experimental results show classification accuracies of 68.1% and 84.1%, improvements of 2.0% and 3.6% over pixel-level domain adaptation, which forms the basis of our model.
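The three-part design described above can be sketched in PyTorch. The abstract does not specify layer sizes or the exact attention formulation, so the SAGAN-style self-attention block, the network depths, and all channel counts below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Self-attention over spatial positions (SAGAN-style; an assumed
    formulation). The learned residual weight `gamma` starts at zero,
    so the module initially passes the input through unchanged, which
    helps keep the source-image foreground intact in generated samples."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

class Generator(nn.Module):
    """Pixel-space adaptation: maps a source-domain image toward the
    target-domain appearance while keeping the same spatial layout."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            SelfAttention(32),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Damaged/undamaged classifier, trained on adapted plus original
    source-domain images (which carry the source labels)."""
    def __init__(self, channels=3, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)
```

In training, a discriminator (omitted here) would score whether an adapted image looks like a real target-domain image, providing the adversarial loss for the generator; the classifier loss is computed on both adapted and original source images using the source labels.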