Abstract

State-of-the-art deep learning technology has been successfully applied to relatively small selected areas of very high spatial resolution (0.15 and 0.25 m) optical aerial imagery acquired by a fixed-wing aircraft to automatically characterize ice-wedge polygons (IWPs) in the Arctic tundra. However, mapping IWPs at regional to continental scales requires images acquired from different sensor platforms (particularly satellites) and a refined understanding of the method's performance stability across those platforms through reliable evaluation assessments. In this study, we examined the transferability of a deep learning Mask Region-Based Convolutional Neural Network (R-CNN) model for mapping IWPs in satellite remote sensing imagery (~0.5 m) covering 272 km² and unmanned aerial vehicle (UAV) imagery (0.02 m) covering 0.32 km². The multispectral images were obtained from the WorldView-2 satellite sensor and pan-sharpened to ~0.5 m, while the UAV imagery was acquired with a 20 MP CMOS sensor camera. The training dataset included 25,489 and 6022 manually delineated IWPs from satellite and fixed-wing aircraft aerial imagery, respectively, near the Arctic Coastal Plain of northern Alaska. Quantitative assessments showed that individual IWPs were correctly detected at up to 72% and 70%, and delineated at up to 73% and 68%, F1 score accuracy levels for satellite and UAV images, respectively. Expert-based qualitative assessments showed that IWPs were correctly detected at good (40–60%) and excellent (80–100%) accuracy levels for satellite and UAV images, respectively, and delineated at an excellent (80–100%) level for both. We found that (1) regardless of spatial resolution and spectral bands, the deep learning Mask R-CNN model effectively mapped IWPs in both satellite and UAV images; (2) the model detected IWPs more accurately in finer-resolution imagery, such as the UAV photo, yet delineated them more accurately in coarser-resolution imagery, such as the satellite images; (3) adding more training data does not necessarily improve Mask R-CNN performance when the training imagery and the application imagery differ in resolution; and (4) overall, the model underestimates the total number of IWPs, particularly disjoint/incomplete IWPs.
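The F1 scores above combine precision and recall of IWP detection and delineation. The paper's own evaluation code is not reproduced here; the following is a minimal sketch of how such a detection F1 score could be computed, assuming (hypothetically) that a predicted polygon counts as a true positive when its intersection-over-union (IoU) with an unmatched reference polygon reaches 0.5, and approximating polygons by axis-aligned bounding boxes for brevity.

# Minimal sketch (not the authors' code): detection precision/recall/F1 for
# predicted vs. reference ice-wedge polygons, approximated by bounding boxes
# (xmin, ymin, xmax, ymax) and a hypothetical IoU threshold of 0.5.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def detection_scores(predicted, reference, iou_threshold=0.5):
    """Greedily match predictions to reference polygons; return (precision, recall, F1)."""
    matched, tp = set(), 0
    for p in predicted:
        best_j, best_iou = None, 0.0
        for j, r in enumerate(reference):
            score = iou(p, r)
            if j not in matched and score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None and best_iou >= iou_threshold:
            matched.add(best_j)
            tp += 1
    fp, fn = len(predicted) - tp, len(reference) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    preds = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 55, 55)]
    refs = [(1, 1, 10, 10), (20, 21, 31, 30), (40, 40, 45, 45)]
    print(detection_scores(preds, refs))  # (0.667, 0.667, 0.667)

The published accuracy levels were derived from the authors' own matching rules applied to the actual polygon geometries, which may differ from this simplified bounding-box illustration.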

Highlights

  • Ice wedges and the resultant ground surface feature, ice-wedge polygons, are ubiquitous in the Arctic

  • The model performed neither better nor worse in mapping ice-wedge polygons (IWPs) even after being trained with additional data of different resolutions

  • Our results show that the Mask Region-Based Convolutional Neural Network (R-CNN) model performed satisfactorily in identifying IWPs (54–72% F1 scores for satellite imagery and 61–70% F1 scores for the unmanned aerial vehicle (UAV) photo) and in delineating the identified IWPs (54–73% F1 scores for satellite imagery and 61–68% F1 scores for the UAV photo)

Introduction

Ice wedges and the resultant ground surface feature, ice-wedge polygons, are ubiquitous in the Arctic. Ice wedges occur in areas underlain by permafrost, which is ground that remains below 0 °C for at least two consecutive years [1,2], and are typically found in continuous permafrost regions with a mean annual air temperature below −6 °C [3]. Ice wedges grow when the frozen ground cracks during winter thermal contraction and snowmelt water infiltrates the cracks in spring, refreezing as vertical veins of ice. Repeated over hundreds to thousands of years, this process results in the development of ice wedges up to several meters in width near the top, pushing the soils upward and outward and forming ~0.1 to 0.5 m tall rims on the margins of narrow troughs [4,5,6,7]. The network of troughs or rims above ice wedges may be visible in high-resolution (~1 m) satellite and aerial imagery.
