Abstract

The application of Deep Learning techniques to point clouds for urban object classification is limited by the large number of samples required. Acquiring and labelling point clouds is more expensive and tedious than the equivalent process for images, and online point cloud datasets offer few samples for Deep Learning, or do not always contain the desired classes. This work focuses on minimizing the number of point cloud samples needed to train a neural network for urban object classification. The proposed method converts point clouds to images (pc-images), which enables: the use of Convolutional Neural Networks, the generation of several samples (images) per object (point cloud) by means of multi-view rendering, and the combination of pc-images with images from online datasets (ImageNet and Google Images). The study is conducted on ten classes of objects extracted from two street point clouds from two different cities. The network selected for the task is InceptionV3. The training set consists of 5000 online images with a variable percentage (0% to 10%) of pc-images, while the validation and testing sets are composed exclusively of pc-images. Whereas the network trained only with online images reaches 47% accuracy, including a small percentage of pc-images in the training set improves classification, reaching 99.5% accuracy with 6% pc-images. The network is also applied to the IQmulus & TerraMobilita Contest dataset, where it correctly classifies elements with few samples.
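The multi-view conversion described above can be sketched as follows. This is a minimal illustration, assuming an orthographic projection and rotation of the point cloud about the vertical (z) axis; the paper's exact rendering parameters (viewpoints, resolution, shading) are not specified here, so the function name and defaults are hypothetical:

```python
import numpy as np

def point_cloud_to_images(points, n_views=8, img_size=64):
    """Render an (N, 3) point cloud into n_views binary occupancy images
    by rotating around the z axis and projecting orthographically onto
    the x-z plane. Illustrative sketch only, not the paper's pipeline."""
    images = []
    for k in range(n_views):
        theta = 2.0 * np.pi * k / n_views
        # Rotation about the vertical (z) axis
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
        p = points @ rot.T
        # Orthographic projection: keep x (horizontal) and z (height)
        xz = p[:, [0, 2]]
        xz = xz - xz.min(axis=0)
        scale = xz.max()
        if scale > 0:
            xz = xz / scale
        pix = np.clip((xz * (img_size - 1)).astype(int), 0, img_size - 1)
        img = np.zeros((img_size, img_size), dtype=np.uint8)
        # Flip the vertical axis so that +z points up in the image
        img[img_size - 1 - pix[:, 1], pix[:, 0]] = 255
        images.append(img)
    return images
```

Each object thus yields several training images, which can then be mixed in a chosen proportion with online images before fine-tuning a CNN such as InceptionV3.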
