Abstract

This research utilises heterogeneous sensor fusion of 3D Light Detection and Ranging (LiDAR) and cameras, combined with an object recognition system and a ranging system, to construct an edge computing platform that allows a vehicle equipped with it to perform computations offline in real time. This work comprises two main parts: the first is heterogeneous fusion, and the second is obstacle recognition and ranging detection. To achieve heterogeneous sensor fusion, 3D–3D point matching was used to find the rigid-body transformation between the two sensors and project the LiDAR 3D point cloud onto the 2D camera image. For object recognition, YOLOv4-Tiny was used as the detection network; its lightweight architecture and high computational speed make it well suited to edge computing hardware with limited performance. Further, for each detected bounding box, the point cloud falling inside the box was used to estimate the distance to the obstacle. For distance detection, two methods, ‘minimum point in box’ and ‘median point in box’, were tested and their results compared. With heterogeneous sensor fusion, object recognition, and the ranging system, the category and distance of obstacles ahead of the vehicle could be detected in real time. Furthermore, integrating the edge computing platform architecture allowed the entire system to run offline as an independent system that returns results in real time. Finally, a dynamic test was conducted on a road. The experiment showed that the detection speed of YOLOv4-Tiny in the dynamic test exceeded 60 FPS and the accuracy rate surpassed 70%. Furthermore, the distance detection error of the 3D LiDAR was less than 3 cm, which is sufficiently accurate for complex road environments.
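The abstract does not include the authors' implementation; the following is a minimal Python/NumPy sketch, under assumed calibration values, of the two steps it describes: projecting LiDAR points into the camera image using a rigid-body transformation and camera intrinsics, then estimating obstacle distance from the points that fall inside a detector bounding box with the ‘minimum point in box’ and ‘median point in box’ rules. All function names, calibration numbers, and the synthetic point cloud are illustrative assumptions, not the authors' code.

```python
import numpy as np


def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D LiDAR points (N x 3) into the 2D image plane.

    R (3x3) and t (3,) give the rigid-body transformation from the LiDAR
    frame to the camera frame; K (3x3) is the camera intrinsic matrix.
    Returns pixel coordinates and camera-frame depths for the points that
    lie in front of the camera.
    """
    # Rigid-body transformation into the camera frame.
    points_cam = points_lidar @ R.T + t
    depths = points_cam[:, 2]

    # Discard points behind the image plane.
    in_front = depths > 0
    points_cam, depths = points_cam[in_front], depths[in_front]

    # Perspective projection with the camera intrinsics.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, depths


def distance_in_box(pixels, depths, box):
    """Estimate obstacle distance from projected points inside a detector
    bounding box (x_min, y_min, x_max, y_max).

    Returns the 'minimum point in box' and 'median point in box' estimates.
    """
    x_min, y_min, x_max, y_max = box
    inside = ((pixels[:, 0] >= x_min) & (pixels[:, 0] <= x_max) &
              (pixels[:, 1] >= y_min) & (pixels[:, 1] <= y_max))
    if not np.any(inside):
        return None, None  # no LiDAR return falls inside this detection
    box_depths = depths[inside]
    return float(np.min(box_depths)), float(np.median(box_depths))


if __name__ == "__main__":
    # Hypothetical calibration: intrinsics for a 1280x720 camera and a
    # LiDAR-to-camera transform with the axes already aligned.
    K = np.array([[900.0, 0.0, 640.0],
                  [0.0, 900.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, -0.08, -0.05])

    # Synthetic point cloud standing in for one LiDAR sweep.
    cloud = np.random.uniform([-5.0, -2.0, 2.0], [5.0, 2.0, 40.0], (2000, 3))

    pixels, depths = project_lidar_to_image(cloud, R, t, K)
    # Bounding box as it would come from the YOLOv4-Tiny detector.
    print(distance_in_box(pixels, depths, (500, 250, 780, 470)))
```

In general, the minimum rule reports the nearest LiDAR return inside the box, which is conservative for collision avoidance, while the median rule is less sensitive to stray background or foreground points that happen to project into the box.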
