Abstract
Floods are among the most serious natural disasters, with severe consequences including loss of life, destruction of infrastructure, and economic disruption. In recent years, deep learning has gained popularity for fast and accurate flood mapping from synthetic aperture radar (SAR) imagery, supporting damage assessment and proactive mitigation. However, due to the complex characteristics of SAR images, accurate flood mapping remains challenging. In this study, we propose a novel Prior-Diagonal Cross Attention-guided transformer (PDCA-Former) network for flood mapping from SAR images. Specifically, PDCA-Former adopts a Prior Siamese Feature Extraction (PSFE) module to extract multi-scale deep features from the input SAR images. In addition, we propose a novel Diagonal Cross-Attention Module (DCAM) to capture relational information among all pixel positions across the entire image. DCAM is integrated into the Transformer to acquire contextual tokens with spatio-temporal information from the prior features, producing the final inundation maps. To investigate the potential of SAR imagery and the proposed PDCA-Former for effective flood detection and for estimating the extent of damaged farmland around the confluence of two rivers, we selected the Sudanese city of Khartoum as the experimental study area. The experimental results show that PDCA-Former outperforms recent state-of-the-art methods, achieving an F1 score of 88.9% and an IoU of 85.7%. We conclude that PDCA-Former offers a promising solution for accurate and efficient flood mapping from SAR imagery that can be quickly generalized to other regions, and it can therefore significantly aid disaster management efforts for vulnerable communities.
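The abstract does not give implementation details for DCAM, so the snippet below is only a minimal sketch of the general idea it describes: letting every pixel exchange information with other positions across the whole image through cross-shaped (row and column) attention, in the spirit of criss-cross attention. The class name CrissCrossAttention, the reduction ratio, and the learnable gamma weight are illustrative assumptions and do not correspond to the authors' DCAM code, whose diagonal formulation may differ.

```python
# Illustrative sketch (NOT the authors' DCAM): each pixel attends to the other
# positions in its row and column; stacking two such blocks lets information
# from all pixel positions reach every location.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrissCrossAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Row attention: each pixel attends to all positions in its row.
        q_row = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)    # (b*h, w, c')
        k_row = k.permute(0, 2, 1, 3).reshape(b * h, -1, w)    # (b*h, c', w)
        v_row = v.permute(0, 2, 3, 1).reshape(b * h, w, c)     # (b*h, w, c)
        attn_row = F.softmax(torch.bmm(q_row, k_row), dim=-1)  # (b*h, w, w)
        out_row = torch.bmm(attn_row, v_row).reshape(b, h, w, c).permute(0, 3, 1, 2)

        # Column attention: each pixel attends to all positions in its column.
        q_col = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)    # (b*w, h, c')
        k_col = k.permute(0, 3, 1, 2).reshape(b * w, -1, h)    # (b*w, c', h)
        v_col = v.permute(0, 3, 2, 1).reshape(b * w, h, c)     # (b*w, h, c)
        attn_col = F.softmax(torch.bmm(q_col, k_col), dim=-1)  # (b*w, h, h)
        out_col = torch.bmm(attn_col, v_col).reshape(b, w, h, c).permute(0, 3, 2, 1)

        # Residual fusion of the two axial contexts.
        return x + self.gamma * (out_row + out_col)


if __name__ == "__main__":
    # Toy shape check on a random feature map; in the paper's setting the input
    # would be fused multi-scale Siamese features from pre-/post-event SAR.
    feats = torch.randn(2, 64, 32, 32)
    out = CrissCrossAttention(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```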