Abstract

The data processing of airborne full-waveform light detection and ranging (LiDAR) systems has become a research hotspot in the LiDAR field in recent years. However, the accuracy and reliability of full-waveform classification remain a challenge. Existing methods based on hand-crafted features and deep learning cannot fully exploit the temporal dependencies and spatial information contained in the full waveform. To preserve these temporal dependencies, we convert the waveforms into Gramian angular summation field (GASF) images using a polar-coordinate encoding. By introducing spatial attention modules into the neural network, we emphasize the locations of informative texture in the GASF images. Finally, we use open-source and simulated data to evaluate the impact of different network architectures and transformation methods. Compared with state-of-the-art methods, the proposed method achieves higher precision and F1 scores. The results suggest that transforming the full waveform into GASF images and introducing a spatial attention module outperforms the other classification methods considered.
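
As a rough illustration of the two ideas named in the abstract, the sketch below shows one common way to compute a GASF image from a rescaled 1-D waveform and a CBAM-style spatial attention gate. It is a minimal sketch under stated assumptions, not the authors' implementation; the names `gasf_image` and `SpatialAttention` are illustrative.

```python
# Minimal sketch: GASF encoding of a waveform and a CBAM-style spatial
# attention gate. Names and shapes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def gasf_image(waveform: np.ndarray) -> np.ndarray:
    """Gramian angular summation field of a 1-D signal.

    The waveform is rescaled to [-1, 1], mapped to polar angles
    phi = arccos(x), and the image is G[i, j] = cos(phi_i + phi_j),
    which preserves the temporal ordering along both axes.
    """
    x = np.asarray(waveform, dtype=np.float64)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)                               # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])       # GASF matrix (n x n)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: reweight each pixel with a mask
    derived from channel-wise average and max pooling."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        avg = feat.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        mx, _ = feat.max(dim=1, keepdim=True)         # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return feat * mask                            # emphasize informative locations


if __name__ == "__main__":
    img = gasf_image(np.random.rand(64))              # simulated 64-sample waveform
    feat = torch.randn(1, 16, 64, 64)                 # dummy CNN feature map
    print(img.shape, SpatialAttention()(feat).shape)
```

In this reading, the GASF image replaces the raw 1-D waveform as network input, and the attention mask lets the classifier weight the image regions where the waveform texture is most discriminative.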
