Abstract

Using an unmanned aerial vehicle (UAV) paired with image semantic segmentation to classify land cover within natural vegetation can support the management of forest and grassland resources. Semantic segmentation normally excels in medical imaging and building classification, but its usefulness in mixed forest-grassland ecosystems in semi-arid to semi-humid climates is unknown. This study proposes a new semantic segmentation network, LResU-net, in which a residual convolution unit (RCU) and a loop convolution unit (LCU) are added to the U-net framework to classify images of different land covers captured by a UAV high-resolution camera. The proposed model enhanced classification accuracy by improving gradient propagation via the RCU and modifying the size of the convolution layers via the LCU, while also reducing the number of convolution kernels. To achieve this objective, a group of orthophotos was taken at an altitude of 260 m in a natural forest-grassland ecosystem of Keyouqianqi, Inner Mongolia, China, and the results were compared with those of three other network models (U-net, ResU-net and LU-net). The results show that both the highest kappa coefficient (0.86) and the highest overall accuracy (93.7%) were achieved by LResU-net, and the producer’s and user’s accuracy for most land covers generated by LResU-net exceeded 0.85. The pixel-area ratio approach was used to calculate the real areas of 10 different land covers, of which grasslands accounted for 67.3%. The analysis of the effect of the RCU and LCU on model training performance indicates that the time per epoch was shortened from 358 s (U-net) to 282 s (LResU-net). In addition, to handle areas that could not be reliably distinguished, an unclassified category was defined and its impact on classification was assessed. LResU-net generated significantly more accurate results than the other three models and was regarded as the most appropriate approach to classify land cover in mixed forest-grassland ecosystems.
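The pixel-area ratio approach mentioned in the abstract converts per-class pixel counts from the segmentation map into real ground areas. A minimal sketch of this idea follows; the class names, pixel counts, and ground sampling distance are hypothetical placeholders, not values from the study.

```python
# Sketch of the pixel-area ratio approach: convert per-class pixel counts
# from a segmentation map into real-world areas and area shares.
# All numbers below are illustrative placeholders, not study data.

def class_areas(pixel_counts, gsd_m):
    """Return real area (m^2) and area share for each land-cover class.

    pixel_counts: dict mapping class name -> number of classified pixels
    gsd_m: ground sampling distance in metres per pixel (assumed known
           from flight altitude and camera parameters)
    """
    pixel_area = gsd_m ** 2                      # ground area of one pixel (m^2)
    total = sum(pixel_counts.values())
    areas = {c: n * pixel_area for c, n in pixel_counts.items()}
    shares = {c: n / total for c, n in pixel_counts.items()}
    return areas, shares

# Hypothetical counts for three land-cover classes at an assumed 5 cm/pixel
counts = {"grassland": 673_000, "forest": 250_000, "bare_soil": 77_000}
areas, shares = class_areas(counts, gsd_m=0.05)
print(shares["grassland"], areas["grassland"])
```

With these placeholder counts the grassland share is 0.673, mirroring the 67.3% figure reported in the abstract purely for illustration.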

Highlights

  • As one of the world’s largest renewable natural resources, mixed forest-grassland resources directly affect the development of agriculture, forestry and other industries (Langley et al 2001; Ma et al 2010)

  • The results show that both the highest kappa coefficient (0.86) and the highest overall accuracy (93.7%) resulted from LResU-net, and the value of most land covers provided by the producer’s and user’s accuracy generated in LResU-net exceeded 0.85

  • Christian and Christiane (2014) compared forest point cloud data collected from unmanned aerial vehicle (UAV) images and airborne LiDAR and concluded that more information was captured through UAV image data
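The kappa coefficient and overall accuracy quoted in the highlights are standard confusion-matrix metrics. A minimal sketch of how they are computed, using a hypothetical two-class confusion matrix rather than data from the study:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix.
# The matrix at the bottom is a hypothetical 2-class example, not study data.

def overall_accuracy(cm):
    """Fraction of correctly classified pixels (diagonal / total)."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)                    # observed agreement
    # expected chance agreement from row and column marginals
    pe = sum(sum(cm[i]) * sum(cm[j][i] for j in range(n))
             for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)

cm = [[90, 10],   # rows: reference class, columns: predicted class
      [5, 95]]
print(overall_accuracy(cm), kappa(cm))
```

For this placeholder matrix the overall accuracy is 0.925 and kappa is 0.85; kappa is lower than raw accuracy because it discounts the agreement expected by chance alone.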


Introduction

As one of the world’s largest renewable natural resources, mixed forest-grassland resources directly affect the development of agriculture, forestry and other industries (Langley et al 2001; Ma et al 2010). According to Scurlock et al (2002) and Dong et al (2017b), mixed forest-grassland ecosystems cover approximately 3.2 billion hectares, accounting for 40% of the total land area. Remote sensing offers a way to classify land cover in mixed forest-grassland ecosystems. It has increasingly become an opportunity to attach high-resolution cameras (Huseyin et al 2019), LiDAR (Yang et al 2020), thermal infrared sensors (Crusiol et al 2019) and hyperspectral cameras (Clark et al 2018) on UAVs to better collect field information for land classification. Hyperspectral imaging may be limited when used on grassland areas with low-level color contrast, as it creates a large amount of redundant data (Grigorieva et al 2020). UAV-mounted high-resolution cameras are therefore one of the preferred tools to classify land cover in a mixed forest-grassland ecosystem.
