Abstract

The 3D vehicle trajectory in complex traffic conditions, such as crossroads and heavy traffic, is of great practical use in autonomous driving. To accurately extract the 3D vehicle trajectory from a perspective camera at a crossroad, where vehicles can face any heading over a 360-degree range, several problems must be solved: the narrow visual angle of a single-camera scene, vehicle occlusion under conditions of low camera perspective, and the lack of physical vehicle information. In this paper, we propose a method for estimating the 3D bounding boxes of vehicles and extracting their trajectories using a deep convolutional neural network (DCNN) in an overlapping multi-camera crossroad scene. First, traffic data were collected using overlapping multi-cameras to obtain a wide range of trajectories around the crossroad. Then, the 3D bounding boxes of vehicles were estimated and tracked in each single-camera scene through DCNN models (YOLOv4 and a multi-branch CNN) combined with camera calibration. Using this information, the 3D vehicle trajectory was extracted on the ground plane of the crossroad by combining the results from the overlapping multi-cameras with a homography matrix. Finally, in the experiments, the errors of the extracted trajectories were corrected through simple linear interpolation and regression, and the accuracy of the proposed method was verified by computing the difference from ground-truth data. Compared with previously reported methods, our approach is shown to be more accurate and more practical.
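
As a minimal sketch of two of the steps described above, the following Python snippet shows a homography projection onto the ground plane and a simple interpolation/regression correction. The function names are hypothetical, the 3x3 homography H is assumed to be pre-computed per camera from calibration, and the straight-line regression is only sensible per straight trajectory segment; the paper's actual implementation may differ.

```python
import numpy as np

def project_to_ground(points_uv, H):
    """Map pixel points (N, 2) to crossroad ground-plane coordinates (N, 2)
    via a 3x3 homography H (image plane -> ground plane)."""
    pts = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the scale component

def correct_trajectory(frames, xy, all_frames):
    """Fill missing frames by linear interpolation, then smooth each axis
    with a least-squares line fit; reasonable only for straight segments."""
    x = np.interp(all_frames, frames, xy[:, 0])
    y = np.interp(all_frames, frames, xy[:, 1])
    cx = np.polyfit(all_frames, x, 1)  # degree-1 regression per axis
    cy = np.polyfit(all_frames, y, 1)
    return np.column_stack([np.polyval(cx, all_frames),
                            np.polyval(cy, all_frames)])
```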

Highlights

  • With the development of intelligent transportation systems (ITS), it is possible to obtain a large amount of vehicle trajectory data that reflect the movement of vehicles on the road from fixed cameras [1]

  • Compared to 3D trajectories, 2D trajectories do not include any physical information about objects in the real world, so they are difficult to apply in practical applications such as collision detection and warning [14] and traffic accident situation reconstruction [15] in autonomous driving

  • The performance of the proposed method was evaluated by calculating the difference between extracted 3D vehicle trajectory results and ground-truth data (a minimal error-metric sketch follows below)
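
This excerpt does not spell out the exact error metric, so the following is only a plausible sketch: a root-mean-square position error between an extracted ground-plane trajectory and its ground truth, sampled at the same frames. The function name and the metric choice are assumptions, not the paper's stated evaluation procedure.

```python
import numpy as np

def trajectory_rmse(pred_xy, gt_xy):
    """Root-mean-square Euclidean distance (e.g., in metres) between an
    extracted ground-plane trajectory and its ground truth, both given as
    (N, 2) arrays sampled at the same frames (an assumption)."""
    errors = np.linalg.norm(pred_xy - gt_xy, axis=1)  # per-frame distance
    return float(np.sqrt(np.mean(errors ** 2)))
```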

Introduction

With the development of intelligent transportation systems (ITS), it is possible to obtain a large amount of vehicle trajectory data that reflect the movement of vehicles on the road from fixed cameras [1]. Peng et al. [18] proposed a method for extracting vehicle trajectories through CNN-based multi-object tracking in a non-overlapping multi-camera scene and visualizing them on a satellite map through calculation with a homography matrix. In this method, vehicle matching is performed using CNN features to obtain continuous vehicle trajectories, but the result does not contain 3D physical information of the vehicles. Moreover, in complex traffic conditions such as crossroads and heavy traffic, it is difficult to accurately estimate a rotating trajectory or an anomalous trajectory that does not match the traffic flow. In this regard, our proposed method can obtain accurate 3D vehicle bounding boxes for all moving directions of vehicles, and it is robust to narrow visual angles and vehicle/obstacle occlusion under conditions of low camera perspective. Using trained DCNN models (YOLOv4 and a multi-branch CNN) combined with camera calibration, the 3D bounding boxes of vehicles are estimated and tracked in each single-camera scene.
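
To make the overlapping-camera idea concrete, the sketch below matches ground-plane tracks from two cameras by their mean distance at common frames and returns matched ID pairs. The greedy strategy, the 2 m threshold, and all names are illustrative assumptions, not the matching criterion used by Peng et al. [18] or by our method; frame indices are assumed to be synchronized integers.

```python
import numpy as np

def match_overlapping_tracks(tracks_a, tracks_b, max_dist=2.0):
    """Greedily pair tracks from two overlapping cameras.

    tracks_a, tracks_b: dict track_id -> (N, 3) array of (frame, x, y),
    already projected onto the shared crossroad ground plane.
    Returns (id_a, id_b) pairs whose co-visible samples lie within
    max_dist metres of each other on average.
    """
    pairs, used_b = [], set()
    for id_a, ta in tracks_a.items():
        best_id, best_d = None, max_dist
        for id_b, tb in tracks_b.items():
            if id_b in used_b:
                continue
            # Compare positions only at frames seen by both cameras.
            _, ia, ib = np.intersect1d(ta[:, 0], tb[:, 0], return_indices=True)
            if len(ia) == 0:
                continue
            d = np.linalg.norm(ta[ia, 1:] - tb[ib, 1:], axis=1).mean()
            if d < best_d:
                best_id, best_d = id_b, d
        if best_id is not None:
            used_b.add(best_id)
            pairs.append((id_a, best_id))
    return pairs
```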

Comparison of different
Materials and Methodology
Framework
Single-camera
Trajectory Reconstruction and Overlapping Vehicle Matching
Experiments and Results
Dataset Labeling and Training Results
Results
Conclusions