Abstract

Natural and man-made disasters cause extensive damage to built infrastructure and result in the loss of human lives. Rehabilitation efforts and rescue operations are hampered by the lack of accurate and timely information about the location and extent of damaged infrastructure. In this paper, we model destruction in satellite imagery using a deep learning model trained with a weakly-supervised approach. In contrast to previous approaches, which frame the problem as change detection (using pre- and post-event images), we identify destruction directly from a single post-event image. To overcome the difficulty of collecting the pixel-level ground truth typically required for training, we assume only image-level labels indicating whether destruction is present anywhere in a given image. The proposed attention-based mechanism automatically learns to identify image patches containing destruction under a sparsity constraint. Furthermore, to reduce false positives and improve segmentation quality, we propose a hard negative mining technique that yields a considerable improvement over the baseline. To validate our approach, we collected a new dataset containing destruction and non-destruction images from Indonesia, Yemen, Japan, and Pakistan. On the test set, we obtain strong destruction detection results, with a pixel-level accuracy of 93% and a patch-level accuracy of 91%. The source code and dataset will be made publicly available.
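The abstract does not give architectural details, so the following is only a minimal sketch of how image-level supervision, patch attention, and a sparsity constraint could fit together. The backbone, layer sizes, pooling grid, and the `sparsity_weight` hyperparameter are all illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeaklySupervisedDestructionNet(nn.Module):
    """Hypothetical sketch: score image patches for destruction and
    aggregate them into an image-level prediction via attention."""

    def __init__(self, feat_dim=512, patch_grid=8):
        super().__init__()
        # Small stand-in backbone producing a patch_grid x patch_grid feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(patch_grid),
        )
        self.attention = nn.Conv2d(feat_dim, 1, 1)   # per-patch attention score
        self.classifier = nn.Conv2d(feat_dim, 1, 1)  # per-patch destruction score

    def forward(self, x):
        feats = self.backbone(x)                         # B x C x G x G
        attn = torch.sigmoid(self.attention(feats))      # B x 1 x G x G
        patch_scores = torch.sigmoid(self.classifier(feats))
        # Attention-weighted pooling yields an image-level probability,
        # so only image-level labels are needed for training.
        image_score = (attn * patch_scores).sum(dim=(2, 3)) / (attn.sum(dim=(2, 3)) + 1e-6)
        return image_score.squeeze(1), attn


def loss_fn(image_score, attn, label, sparsity_weight=0.01):
    """Image-level BCE plus an L1 penalty on the attention map, which
    encourages only a few patches to be flagged as destroyed."""
    bce = F.binary_cross_entropy(image_score, label.float())
    sparsity = attn.abs().mean()
    return bce + sparsity_weight * sparsity
```

Under this kind of formulation, the attention map doubles as a patch-level localization of destruction at inference time, even though only image-level labels are seen during training; hard negative mining would then feed difficult non-destruction images back into training to suppress false positives.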
