Abstract

Mapping the distribution of forest resources at the tree species level is important because of its strong association with many quantitative and qualitative indicators. With the ongoing development of artificial intelligence technologies, the effectiveness of deep-learning classification models for high spatial resolution (HSR) remote sensing images has been proven. However, owing to poor statistical separability and complex scenarios, fully automated and highly accurate mapping of forest types at the tree species level remains challenging. To address this problem, a novel end-to-end deep learning fusion method for HSR remote sensing images was developed, combining the advantageous properties of multi-modality representations with a powerful post-processing step to optimize forest classification performance, refined to the dominant tree species level, in an automated way. The proposed model consists of a two-branch fully convolutional network (dual-FCN8s) and a conditional random field as recurrent neural network (CRFasRNN), named dual-FCN8s-CRFasRNN in this paper. By constructing a dual-FCN8s network, the model extracts and fuses multi-modality features to recover a high-resolution, semantically strong feature representation. By embedding the CRFasRNN module into the network as a post-processing step, the model optimizes the classification result automatically and generates a result with explicit category information. Quantitative evaluations on China's Gaofen-2 (GF-2) HSR satellite data showed that the dual-FCN8s-CRFasRNN provided competitive performance, with an overall classification accuracy (OA) of 90.10% and a Kappa coefficient of 0.8872 in the Wangyedian forest farm, and an OA of 74.39% and a Kappa coefficient of 0.6973 in the GaoFeng forest farm, respectively.
Experimental results also showed that the proposed model achieved higher OA and Kappa coefficients than four other recently developed deep learning methods and struck a better trade-off between automation and accuracy, further confirming the applicability and superiority of the dual-FCN8s-CRFasRNN for mapping forest types at the tree species level.
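
The Kappa coefficient reported alongside OA above corrects raw classification agreement for agreement expected by chance. A minimal pure-Python sketch of the standard Cohen's Kappa computation (the function name and example confusion matrix are illustrative, not taken from the paper):

```python
def kappa_from_confusion(cm):
    """Cohen's Kappa from a square confusion matrix (rows = reference,
    columns = predicted): kappa = (p_o - p_e) / (1 - p_e)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)                    # total number of samples
    p_o = sum(cm[i][i] for i in range(k)) / n          # observed agreement (= OA)
    row_tot = [sum(row) for row in cm]
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    # Chance agreement from the marginal class distributions
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Balanced two-class example: OA = 0.90, Kappa = 0.80
print(kappa_from_confusion([[45, 5], [5, 45]]))
```

Note that, as in the results above, Kappa is typically somewhat lower than OA because the chance-agreement term is subtracted out.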

Highlights

  • Forest classification at the tree species level is important for the management and sustainable development of forest resources [1]

  • The results showed that adding the CRFasRNN post-processing module to the FCN8s model optimized the classification results more effectively than adding the dual-branch structure to the FCN8s model

  • The results showed that the dual-FCN8s-CRFasRNN model using the green NDVI (GNDVI) index achieved overall classification accuracies (OA) of 89.47% and 73.70% and Kappa coefficients of 0.8804 and 0.6898 for the Wangyedian and GaoFeng forest farms, respectively, which is very similar to the model using the Normalized Difference Vegetation Index (NDVI), with differences of less than 1%
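
The two spectral indices compared in the last highlight are simple normalized band ratios. A minimal sketch, assuming surface-reflectance band values in [0, 1] (function names are illustrative, not from the paper):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: same form as NDVI, with the green band in place of red."""
    return (nir - green) / (nir + green)

# Healthy vegetation reflects strongly in NIR, so both indices are positive
print(ndvi(0.5, 0.1), gndvi(0.5, 0.25))
```

Because both indices share the same normalized-difference form and differ only in the substituted band, the near-identical accuracies reported for the NDVI and GNDVI variants are plausible.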

Introduction

Forest classification at the tree species level is important for the management and sustainable development of forest resources [1]. A growing number of studies have been conducted on this topic [5,6]. One key limitation is the poor statistical separability of such images' spectral range, as they contain a limited number of wavebands [9]. In forests with complex structures and many tree species, the phenomena of "same objects with different spectra" and "different objects with the same spectra" can make it seriously difficult to extract relevant information, which raises the requirements for advanced forest information extraction methods.

