Abstract
The production of long-term landslide maps (LAMs) is crucial for estimating landslide activity, vegetation disturbance, and regional stability. However, LAMs remain unavailable in many regions, despite the application of various machine-learning methods, deep-learning (DL) models, and ensemble strategies to landslide detection. Although transfer learning is considered an effective way to tackle this challenge, the temporal transferability of state-of-the-art DL models for LAM production has seen little exploration or comparison, leaving a significant gap in the research. In this study, an extensive series of tests was conducted to evaluate the temporal transferability of typical semantic segmentation models, namely U-Net, U-Net 3+, and TransU-Net, using a 10-year landslide-inventory dataset covering the area near the epicenter of the Wenchuan earthquake. The experimental results reveal the feasibility and limitations of applying transfer-learning methods to LAM production, particularly when leveraging U-Net 3+. Furthermore, after assessing the effects of varying data volumes, patch sizes, and time intervals, this study recommends settings for LAM production that balance efficiency and mapping performance. These findings can serve as a valuable reference for devising an efficient and reliable strategy for large-scale LAM production in landslide-prone regions.
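To make the temporal-transfer setup concrete, the sketch below shows one plausible fine-tuning loop: a U-Net trained on source-year imagery is adapted to a later year by freezing its encoder and updating only the decoder on target-year patches. This is a minimal illustration, not the authors' released code; the segmentation_models_pytorch library, the checkpoint name, the 256x256 patch size, and all hyperparameters are assumptions, since the abstract does not specify the actual implementation.

```python
# Hedged sketch of temporal transfer learning for binary landslide
# segmentation. File names and hyperparameters are hypothetical.
import os
import torch
import segmentation_models_pytorch as smp

# Binary landslide segmentation: one logit channel per pixel.
model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                 in_channels=3, classes=1)

# Temporal transfer: start from weights trained on source-year patches
# (hypothetical checkpoint name; load only if it exists).
ckpt = "unet_source_year.pt"
if os.path.exists(ckpt):
    model.load_state_dict(torch.load(ckpt))

# Freeze the encoder so only the decoder adapts to the target year.
for p in model.encoder.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = smp.losses.DiceLoss(mode="binary")  # expects logits

def finetune_step(images, masks):
    """One fine-tuning step on a batch of target-year patches."""
    opt.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random tensors stand in for 256x256 image patches and masks.
x = torch.rand(4, 3, 256, 256)
y = (torch.rand(4, 1, 256, 256) > 0.5).float()
print(finetune_step(x, y))
```

Freezing the encoder is one common transfer strategy; full fine-tuning of all layers, or unfreezing the encoder after a few epochs, are equally plausible variants depending on how much target-year data is available.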