Abstract

Understanding the environment around the vehicle is essential for automated driving. For this purpose, an omnidirectional LiDAR is used to obtain surrounding information, and point-cloud-based semantic segmentation methods have been proposed. However, these methods require time to acquire and process the point cloud data, which causes a significant positional shift of objects in practical application scenarios. In this paper, we propose a 1D self-attention network (1D-SAN) for LiDAR-based point cloud semantic segmentation, which builds on the 1D-CNN for real-time pedestrian detection from omnidirectional LiDAR data. Because the proposed method can process data sequentially while the omnidirectional LiDAR is still acquiring it, we can reduce the processing time and suppress the positional shift. Moreover, to improve segmentation accuracy, we use intensity as an additional input and introduce a self-attention mechanism. The intensity makes it possible to take object texture into account, and the self-attention mechanism captures relationships between points. Experimental results on the SemanticKITTI dataset show that the intensity input and the self-attention mechanism improve accuracy; in particular, the self-attention mechanism contributes to improving the accuracy for small objects. We also show that the proposed method is faster than other point cloud segmentation methods.

Keywords: Point cloud, Semantic segmentation, Self-attention, LiDAR, Autonomous driving
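The core operation the abstract refers to, self-attention over a 1D sequence of LiDAR returns, can be sketched as below. This is a minimal illustration under assumptions, not the paper's 1D-SAN architecture: the per-return features (range and intensity), the sequence length, and the projection sizes are all made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_1d(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a 1D sequence.

    x: (T, C) array, one row of features per LiDAR return
       (here C=2: range and intensity, an assumed layout).
    Wq, Wk, Wv: (C, D) projection matrices for queries/keys/values.
    Returns a (T, D) array where each output mixes information
    from every other return, weighted by feature similarity.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (T, T) pairwise affinities
    return softmax(scores, axis=-1) @ v        # attention-weighted sum

rng = np.random.default_rng(0)
T, C, D = 8, 2, 4                # 8 returns, (range, intensity), head dim 4
x = rng.normal(size=(T, C))      # stand-in for one chunk of a LiDAR sweep
Wq, Wk, Wv = (rng.normal(size=(C, D)) for _ in range(3))
out = self_attention_1d(x, Wq, Wk, Wv)
print(out.shape)  # (8, 4)
```

Because the attention is computed over a 1D sequence of returns rather than an unordered 3D point set, chunks of a sweep can be processed as they arrive, which is the property the abstract credits for the reduced latency.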
