Abstract

Perception is one of the core tasks of an autonomous vehicle. It employs sensors to detect objects in the vehicle's vicinity and to estimate the distance from the vehicle to each detected object. Sensors of a single modality have individual drawbacks, which can be overcome by a sensor fusion approach. This work presents an approach that fuses camera and LiDAR sensors. While cameras are good at detecting objects, they fall short in the accuracy of their distance estimates. Conversely, LiDARs estimate distance to vehicles very accurately but exhibit poor object detection capability. Fusing camera and LiDAR for the perception task yields better performance on both sub-tasks. A distance estimation algorithm was developed and tested on a GPU and an Nvidia Jetson TX2 module and was found to be more accurate than previous work.
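The abstract does not detail the fusion algorithm itself, but the general camera–LiDAR fusion approach it describes is commonly realised by projecting LiDAR points into the camera image and aggregating the ranges of points that fall inside a detected bounding box. The sketch below illustrates that idea under simplifying assumptions (points already expressed in the camera frame, a pinhole projection matrix, and hypothetical function and parameter names not taken from the paper):

```python
import numpy as np

def estimate_distance(lidar_points, proj_matrix, bbox):
    """Illustrative sketch: estimate distance to a camera-detected object
    by projecting LiDAR points into the image plane.

    lidar_points: (N, 3) array of [x, y, z] points in the camera frame,
                  z pointing forward, units in metres (assumed convention)
    proj_matrix:  (3, 4) projection matrix (intrinsics @ extrinsics)
    bbox:         (u_min, v_min, u_max, v_max) detection box in pixels
    """
    # Homogeneous coordinates for the projective transform
    pts_h = np.hstack([lidar_points, np.ones((lidar_points.shape[0], 1))])
    img = (proj_matrix @ pts_h.T).T                  # (N, 3)

    # Discard points behind the camera
    front = img[:, 2] > 0
    img, pts = img[front], lidar_points[front]

    # Perspective divide to get pixel coordinates
    u = img[:, 0] / img[:, 2]
    v = img[:, 1] / img[:, 2]

    u_min, v_min, u_max, v_max = bbox
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    if not inside.any():
        return None  # no LiDAR return inside this detection

    # Median range of the in-box points is robust to stray returns
    return float(np.median(np.linalg.norm(pts[inside], axis=1)))
```

For example, with a pinhole matrix `K @ [I | 0]` and a cluster of points 10 m in front of the camera, the function returns approximately 10 m for a box covering their projections; the median keeps single background or foreground returns from skewing the estimate.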
