Abstract

When a strong earthquake strikes, roads are the lifelines of rescue. The rapid development of high-resolution satellite imaging platforms has made it possible to apply remote sensing technology to road damage identification. For years, however, road damage identification has required substantial manual involvement, making it difficult to meet the needs of rapid post-disaster response, and automatic recognition of road damage from satellite images remains challenging: damaged areas appear with blurry boundaries, varied sizes, and uneven spatial distributions. Aiming at automatic pixel-level road damage identification, we introduce the first road damage dataset, CAU-RoadDamage, which includes high-resolution satellite images and pixel-level human annotations. Moreover, we propose, for the first time, applying a pre-trained vision foundation model to automatically identify road damage. Low-rank adaptation (LoRA) is used to fine-tune the foundation model on the satellite images, and two-way attention integrates the foundation model with domain-specialist model components. The proposed segmentation model is compared with multiple state-of-the-art methods on the CAU-RoadDamage dataset. Our approach achieves the highest F1 score, 76.09%, which is notably higher than that of the other models. The experimental results demonstrate the feasibility of pixel-level road damage recognition and the applicability of vision foundation models to downstream remote sensing tasks. The CAU-RoadDamage dataset will be made publicly available at https://github.com/CAU-HE/RoadDamageExtraction.
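To illustrate the low-rank adaptation strategy mentioned in the abstract, the sketch below shows the core LoRA idea in plain NumPy: the pre-trained weight matrix W is kept frozen, and only a low-rank update BA (scaled by alpha/r) is trained. This is a minimal illustration under assumed dimensions, not the paper's actual implementation; all variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8.0

# Frozen pre-trained weight (stands in for a foundation-model layer).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: A is small-random, B is zero-initialized,
# so at the start of fine-tuning the adapted layer equals the frozen one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

# Only A and B are trained: r*(d_in + d_out) parameters
# instead of the full d_in*d_out.
trainable = A.size + B.size
```

With r = 4 the trainable parameter count is 4 × (64 + 64) = 512, versus 4096 for the full weight matrix, which is why LoRA makes fine-tuning a large frozen foundation model on a modest satellite-image dataset tractable.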
