Abstract

Recent advances in deep learning (DL) and unmanned aerial vehicle (UAV) technologies have made it possible to monitor salt marshes more efficiently and precisely. However, studies have rarely compared the classification performance of DL with pixel-based methods for coastal wetland monitoring using UAV data. In particular, most studies have been conducted at the landscape level, and little is known about the performance of species discrimination in very small patches and in mixed vegetation. We constructed a dataset based on UAV-RGB data and compared the performance of pixel-based and DL methods for five scenarios (combinations of annotation type and patch size) in the classification of salt marsh vegetation. Maximum likelihood, a pixel-based classification method, showed the lowest overall accuracy (73%), whereas the U-Net DL method achieved over 90% accuracy in all classification scenarios. As expected, the DL approach achieved the most accurate classification results in this comparison of pixel-based and DL methods. Unexpectedly, there was no significant difference in overall accuracy between the two annotation types or among the labeling data sizes in this study. However, a detailed comparison of the classification results confirmed that polygon-type annotation was more effective for mixed-vegetation classification than bounding-box annotation. Moreover, smaller labeling data sizes were more effective for detecting small vegetation patches. Our results suggest that combining UAV-RGB data with DL can facilitate accurate mapping of coastal salt marsh vegetation at the local scale.
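
For context, the sketch below illustrates what a U-Net-style per-pixel classifier for UAV-RGB patches can look like. It is a minimal illustration only, assuming a PyTorch implementation: the channel widths, network depth, patch size, class count, and the MiniUNet/conv_block names are assumptions made for this example and do not reflect the configuration used in the study.

    # A minimal, illustrative U-Net for multi-class segmentation of UAV-RGB patches.
    # Channel widths, depth, and class count are assumptions for illustration only,
    # not the configuration used in the study.
    import torch
    import torch.nn as nn


    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, the basic U-Net building block.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )


    class MiniUNet(nn.Module):
        def __init__(self, n_classes=5):
            super().__init__()
            self.enc1 = conv_block(3, 32)     # RGB input
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
            self.dec2 = conv_block(128, 64)   # skip connection doubles input channels
            self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.dec1 = conv_block(64, 32)
            self.head = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel class scores

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)              # (N, n_classes, H, W) logits


    # Example: one 256x256 RGB patch -> per-pixel logits for 5 vegetation classes.
    model = MiniUNet(n_classes=5)
    logits = model(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 5, 256, 256])

The encoder-decoder structure with skip connections is what lets a U-Net combine coarse context with fine spatial detail, which is the property relevant to discriminating small and mixed vegetation patches.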
