As Light Detection and Ranging (LiDAR) technology rapidly advances, it is becoming an increasingly viable solution for collecting vehicle classification data. Compared with video, the main challenge of LiDAR for vehicle classification lies in its lower resolution, which limits the ability of LiDAR-based models to classify vehicles in detail from a single captured frame. This paper proposes a novel framework that reconstructs the vehicle point clouds generated by a roadside LiDAR sensor under ground-plane constraints and develops a bootstrap-aggregating deep neural network (bagging DNN) model to classify the reconstructed vehicle point clouds according to the US Federal Highway Administration (FHWA) axle-based vehicle classification scheme. First, a probabilistic registration algorithm estimates the transformation matrix between consecutive frames of each vehicle point cloud, and a multiway registration then fine-tunes the estimated transformation matrices to rebuild the 3D model of each moving vehicle. Second, key features are extracted from the reconstructed vehicle models and fed into the bagging DNN model, which provides classifications based on the FHWA scheme. The classification model with the reconstruction framework outperforms the most recently developed LiDAR-based FHWA classification model in terms of both accuracy and robustness, achieving an 83 percent average correct classification rate (CCR) on the test set. Remarkably, the proposed model accurately distinguishes Class 5 and Class 8 trucks, which have overlapping axle configurations, with a 97 percent and a 90 percent CCR, respectively.
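To illustrate the first step of the pipeline, the sketch below estimates a rigid transformation between two consecutive frames of a vehicle point cloud. Note this is a simplified stand-in, not the paper's method: it uses the closed-form Kabsch (SVD) solution under the assumption that point correspondences between frames are known, whereas the paper applies a probabilistic registration that does not require known correspondences, followed by multiway refinement. All variable names and the simulated motion are illustrative.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate the 4x4 rigid transform mapping src -> dst (Kabsch/SVD).

    Simplified stand-in for the paper's probabilistic registration:
    assumes the i-th point in src corresponds to the i-th point in dst.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)                            # homogeneous 4x4 transform
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example: a vehicle point cloud in frame k, then slightly
# rotated and translated (moving along x) in frame k+1.
rng = np.random.default_rng(0)
frame_k = rng.random((50, 3))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.2, 0.0, 0.0])
frame_k1 = frame_k @ R_true.T + t_true

T = estimate_rigid_transform(frame_k, frame_k1)
```

In the full framework, the per-pair transforms estimated this way would feed a multiway (pose-graph) registration that jointly refines them before the frames are fused into one vehicle model.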