Abstract

Accurately identifying tree species is crucial in digital forestry. Several airborne LiDAR-based classification frameworks have been proposed to facilitate work in this area, and they have achieved impressive results. These models range from the classification of characterization parameters extracted through feature engineering to end-to-end classification based on deep learning. However, in practical applications, strong feature noise within a single sample at varying vertical heights can cause misjudgment among intraspecific samples, thereby limiting classification accuracy; this effect may be exacerbated by scanning conditions and the geographic environment. To address this challenge, a deeply supervised tree classification network (DSTCN) is designed in this article, which introduces a height-intensity dual attention mechanism to deliver improved classification performance. DSTCN takes the histogram feature descriptors of each tree slice as the input vector and weighs the features of each slice according to its height and intensity information, exploiting slices with different information gains more effectively and removing the accuracy limitations imposed by feature noise at varying vertical heights. Experimental results from the classification of seven tree species in a mixed forest in Baden-Württemberg, southwestern Germany, indicate that DSTCN (MAF = 0.94, OA = 0.94, Kappa = 0.93, FISD = 0.02) outperforms two commonly used methods based on PointNet++ (MAF = 0.88, OA = 0.88, Kappa = 0.86, FISD = 0.08) and a BP network (MAF = 0.86, OA = 0.87, Kappa = 0.85, FISD = 0.06) in terms of accuracy, stability, and robustness. The method integrates feature engineering with a deep network model to achieve precise and balanced tree species classification. Its simplified design enables efficient forestry decision-making and offers an innovative approach to employing airborne LiDAR technology for tree species identification in large-scale, multi-layer mixed stands.
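
The abstract's core idea is that per-slice histogram descriptors are re-weighted by an attention mechanism driven by each slice's height and intensity before tree-level classification. The following is a minimal sketch of that idea, not the authors' implementation: the module name, layer sizes, number of slices, histogram bins, and input conventions are all illustrative assumptions.

```python
# Hypothetical sketch of a height-intensity dual attention over per-slice
# histogram descriptors; all names and dimensions are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttentionClassifier(nn.Module):
    def __init__(self, n_slices=20, n_bins=32, n_species=7, hidden=64):
        super().__init__()
        # Encode each slice's histogram descriptor into a hidden feature vector.
        self.slice_encoder = nn.Linear(n_bins, hidden)
        # Two attention branches: one driven by slice height, one by mean intensity.
        self.height_attn = nn.Linear(1, 1)
        self.intensity_attn = nn.Linear(1, 1)
        self.classifier = nn.Linear(hidden, n_species)

    def forward(self, hist, height, intensity):
        # hist:      (batch, n_slices, n_bins)  per-slice histogram descriptors
        # height:    (batch, n_slices, 1)       normalized slice height
        # intensity: (batch, n_slices, 1)       normalized mean return intensity
        feats = torch.relu(self.slice_encoder(hist))                       # (B, S, H)
        # Combine the two attention scores and normalize over slices, so slices
        # with higher information gain contribute more to the tree-level feature.
        score = self.height_attn(height) + self.intensity_attn(intensity)  # (B, S, 1)
        weights = F.softmax(score, dim=1)
        tree_feat = (weights * feats).sum(dim=1)                           # (B, H)
        return self.classifier(tree_feat)                                  # (B, n_species)


if __name__ == "__main__":
    model = DualAttentionClassifier()
    hist = torch.rand(4, 20, 32)
    height = torch.rand(4, 20, 1)
    intensity = torch.rand(4, 20, 1)
    print(model(hist, height, intensity).shape)  # torch.Size([4, 7])
```

The point of the weighted sum is that noisy slices (e.g. sparse returns near the crown top or understory clutter) receive low attention weights, which is how the design aims to suppress the height-dependent feature noise described above.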
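
For reference, the reported metrics can be reproduced from a confusion of predicted and true labels with standard tools. The sketch below assumes MAF is the macro-averaged F1 score and FISD the standard deviation of per-species F1 scores; these expansions are inferred, not stated in the abstract.

```python
# Hypothetical computation of the evaluation metrics quoted in the abstract.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

def evaluate(y_true, y_pred):
    per_class_f1 = f1_score(y_true, y_pred, average=None)
    return {
        "MAF": f1_score(y_true, y_pred, average="macro"),  # macro-averaged F1 (assumed)
        "OA": accuracy_score(y_true, y_pred),               # overall accuracy
        "Kappa": cohen_kappa_score(y_true, y_pred),         # Cohen's kappa
        "FISD": float(np.std(per_class_f1)),                # per-species F1 spread (assumed)
    }

# Example with dummy labels for seven species (0..6):
rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, size=200)
y_pred = np.where(rng.random(200) < 0.9, y_true, rng.integers(0, 7, size=200))
print(evaluate(y_true, y_pred))
```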
