Abstract

Addressing the need for high-quality, time-efficient, and easy-to-use annotation tools, we propose SAnE, a semi-automatic annotation tool for labeling point cloud data. The contributions of this paper are threefold: (1) we propose a denoising pointwise segmentation strategy enabling a fast implementation of one-click annotation, (2) we expand the motion model technique with our guided-tracking algorithm, and (3) we provide an interactive, yet robust, open-source point cloud annotation tool, targeting both skilled and crowdsourcing annotators. Using the KITTI dataset, we show that SAnE speeds up the annotation process by a factor of 4 while achieving Intersection over Union (IoU) agreements of 84%. Furthermore, in experiments using crowdsourcing services, SAnE achieves more than 20% higher IoU accuracy compared to an existing annotation tool and its baseline, while reducing the annotation time by a factor of 3. This result shows the potential of SAnE for providing fast and accurate annotation labels for large-scale datasets at a significantly reduced cost. SAnE is open-sourced at https://github.com/hasanari/sane.
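To make the reported IoU agreement concrete, the sketch below computes the 3D IoU of two axis-aligned bounding boxes. This is a minimal illustration of the metric only: KITTI annotations are oriented boxes, so the paper's evaluation would use rotated-box overlap rather than this simplified axis-aligned form, and the function name and box encoding here are our own assumptions.

```python
import numpy as np

def iou_3d_axis_aligned(box_a, box_b):
    """IoU of two axis-aligned 3D boxes.

    Boxes are encoded as (x_min, y_min, z_min, x_max, y_max, z_max).
    Note: a simplified sketch; KITTI boxes are oriented, so a full
    evaluation needs rotated-box intersection.
    """
    a = np.asarray(box_a, dtype=float)
    b = np.asarray(box_b, dtype=float)
    # Overlap extent along each axis, clamped at zero when boxes are disjoint.
    overlap = np.maximum(0.0, np.minimum(a[3:], b[3:]) - np.maximum(a[:3], b[:3]))
    inter = overlap.prod()
    vol_a = (a[3:] - a[:3]).prod()
    vol_b = (b[3:] - b[:3]).prod()
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0

# Two unit cubes offset by 0.5 m along x: intersection 0.5, union 1.5, IoU ~0.33.
print(iou_3d_axis_aligned((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1)))
```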

Highlights

  • The growing popularity of high-frequency point cloud data scanning real-world driving scenes is fueling a new research stream on 3D perception systems

  • We evaluated our approach on the KITTI tracking dataset [15], using the training data with their labels for our experiments

  • It should be noted that the mean Intersection over Union (IoU) agreement between Ground Truth labeling (GT) and KITTI labels is 72.77%



Introduction

The growing popularity of high-frequency point cloud data scanning real-world driving scenes is fueling a new research stream on 3D perception systems. This enriches the perception-systems discussion, previously centered on image analysis (from cameras), by extending it to the realm of point cloud analysis, which includes point cloud classification, segmentation, and object detection [1], [2]. Several large driving-scene datasets containing point cloud data have recently been published by self-driving technology companies, such as ArgoVerse, Waymo, and Lyft [3], highlighting the trend of collecting and using Light Detection and Ranging (LiDAR) point cloud data in the self-driving technologies being developed and deployed in the real world. Data annotation, i.e., labeling objects in point cloud scenes, is necessary to enable the learning process.

