Abstract

Vegetation segmentation from point cloud data can provide important information for urban planning and environmental protection. Point cloud datasets are typically obtained using light detection and ranging (LiDAR) or RGB-D images, while oblique photogrammetry has received little attention as another important source of point cloud data. We present a pointwise annotated oblique photogrammetry point cloud dataset that contains rich RGB information, texture, and structural features. The dataset covers five regions of Bengbu, China, and contains more than twenty thousand samples. Existing indoor point cloud semantic segmentation models are not directly applicable to oblique photogrammetry point clouds, so we propose a random sampling point transformer (RSPT) network to enhance vegetation segmentation accuracy. RSPT offers an efficient and lightweight architecture: random point sampling is used to downsample the point clouds, and a local feature aggregation module based on self-attention is designed to extract more representative features. The network also incorporates residual and dense connections (ResiDense) to capture both local and comprehensive features. Compared to state-of-the-art models, RSPT achieves notable improvements: the intersection over union (IoU) increases from 96.0% to 96.5%, the F1-score from 90.8% to 97.0%, and the overall accuracy (OA) from 91.9% to 96.9%.
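To make the two ideas named in the abstract concrete, the sketch below shows random point sampling for downsampling and a self-attention feature aggregation block with residual connections. This is a minimal illustrative sketch in PyTorch under assumed shapes and module names; it is not the authors' RSPT implementation, and all function and parameter names here are hypothetical.

```python
# Illustrative sketch only: random downsampling of a point cloud plus a
# self-attention aggregation block with residual connections. Not the
# authors' RSPT code; names and dimensions are assumptions.
import torch
import torch.nn as nn


def random_point_sampling(points, n_samples):
    """Randomly keep n_samples points from a (B, N, C) point cloud."""
    batch, n_points, _ = points.shape
    idx = torch.stack(
        [torch.randperm(n_points)[:n_samples] for _ in range(batch)]
    )                                          # (B, n_samples) indices
    batch_idx = torch.arange(batch).unsqueeze(-1)  # (B, 1), broadcasts over samples
    return points[batch_idx, idx]              # (B, n_samples, C)


class SelfAttentionAggregation(nn.Module):
    """Self-attention over point features, with residual connections."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                      # x: (B, N, dim) point features
        attn_out, _ = self.attn(x, x, x)       # aggregate context across points
        x = self.norm1(x + attn_out)           # residual connection around attention
        return self.norm2(x + self.mlp(x))     # residual connection around the MLP


if __name__ == "__main__":
    cloud = torch.rand(2, 4096, 64)            # toy batch: 2 clouds, 4096 points, 64-dim features
    sampled = random_point_sampling(cloud, 1024)
    features = SelfAttentionAggregation(dim=64)(sampled)
    print(sampled.shape, features.shape)       # (2, 1024, 64) for both
```

Random sampling is cheap compared with farthest point sampling, which is one reason a random-sampling-based design can stay lightweight at the scale of photogrammetry point clouds.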
