Abstract

Autonomous vehicles are one of the most attractive applications for light detection and ranging (LiDAR) sensors, which support scene understanding. Object detection is crucial for this understanding and must be performed on a frame-by-frame basis. Detection on a single frame is a challenging task due to the sparse and unordered nature of the data. This paper presents an alternative spherical representation of LiDAR data aimed at improving object detection. The proposal registers the LiDAR data in a two-dimensional angle map that retains most of the three-dimensional points across three layers, adding reflectivity information and a logarithmic representation of distance. To evaluate this representation, we employed an object detector based on the You Only Look Once version 3 (YOLOv3) algorithm, trained on a public reference dataset of three-dimensional objects. This framework yielded a classification accuracy of 85.9% and an intersection-over-union of 74.5% when estimating seven classes simultaneously. The approach offers an alternative way of processing LiDAR data that exploits most of the available information with high accuracy, helping to reduce the risks associated with autonomous vehicles.
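The spherical representation described above can be sketched as a projection of each point's azimuth and elevation angles onto a two-dimensional grid. The sketch below is illustrative only: the angular resolutions, vertical field of view, and channel layout are assumptions, not the paper's exact parameters, and only two of the three layers (reflectivity and logarithmic distance) are reproduced here.

```python
import numpy as np

def spherical_projection(points, reflectivity, h_res_deg=0.2, v_res_deg=0.4,
                         v_fov_deg=(-24.9, 2.0)):
    """Project 3-D LiDAR points onto a 2-D angle map.

    Channel 0 holds reflectivity, channel 1 a logarithmic distance.
    Resolutions and field of view are illustrative assumptions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)              # range of each point
    azimuth = np.degrees(np.arctan2(y, x))       # horizontal angle, [-180, 180)
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))  # vertical angle

    # map angles to integer pixel indices on the angle grid
    u = ((azimuth + 180.0) / h_res_deg).astype(int)
    v = ((v_fov_deg[1] - elevation) / v_res_deg).astype(int)

    width = int(360.0 / h_res_deg)
    height = int((v_fov_deg[1] - v_fov_deg[0]) / v_res_deg) + 1

    img = np.zeros((height, width, 2), dtype=np.float32)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    img[v[valid], u[valid], 0] = reflectivity[valid]
    img[v[valid], u[valid], 1] = np.log1p(r[valid])  # logarithmic distance
    return img
```

The resulting dense image can then be fed to a conventional 2-D detector such as YOLOv3, which is the general pipeline the abstract describes.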
