Abstract

Floods are devastating phenomena that occur almost all around the world and are responsible for significant losses, in terms of both human lives and economic damage. When floods occur, one of the challenges that emergency response agencies face is identifying the flooded area so that access points and safe routes can be determined quickly. This study presents a flood detection methodology that combines transfer learning with vision transformers and satellite images from open datasets. Transformers are powerful models that have been successfully applied in Natural Language Processing (NLP). A variation of this model is the vision transformer (ViT), which can be applied to image classification tasks. The methodology is applied and evaluated on two types of satellite images: Synthetic Aperture Radar (SAR) images from Sentinel-1 and Multispectral Instrument (MSI) images from Sentinel-2. Using a pre-trained vision transformer and transfer learning, the model is fine-tuned on each of these two datasets so that it learns to determine whether an image contains a flood. The proposed methodology achieves an accuracy of 84.84% on the Sentinel-1 dataset and 83.14% on the Sentinel-2 dataset, indicating its robustness to the image type and its applicability to a wide range of available visual data for flood detection. Moreover, this study shows that the proposed approach outperforms state-of-the-art CNN models by up to 15% on the SAR images and 9% on the MSI images. Overall, the combination of transfer learning, vision transformers, and satellite images is shown to be a promising tool for flood risk management experts and emergency response agencies.
