To better control traffic and promote environmental sustainability, this study proposes a framework for monitoring vehicle counts and velocities in real time. The deep-learning-based You Only Look Once-v4 (Yolo-v4) algorithm greatly improves the accuracy of object detection in an image, while trackers such as Sort and Deepsort resolve the identity-switch problem and thus track multiple objects efficiently. Accordingly, this study combined Yolo-v4 with Sort and Deepsort to develop two trajectory models, referred to as YS and YDS, respectively. In addition, regions of interest (ROIs) with different pixel distances (PDs), named ROI-10 and ROI-14, were defined, and road markings were used to calibrate the PD. Finally, one high-resolution benchmark video and two real-time low-resolution highway videos were employed to validate the proposed framework. The results show that YDS with ROI-10 achieved 90% vehicle-counting accuracy relative to the actual number of vehicles, outperforming YS with ROI-10, whereas YDS with ROI-14 produced relatively good estimates of vehicle velocity. On the real-time low-resolution videos, YDS with ROI-10 achieved 89.5% and 83.7% vehicle-counting accuracy at the Nantun and Daya highway sites, respectively, and yielded reasonable velocity estimates. In future work, more bus and light-truck images could be collected to train Yolo-v4 more effectively and improve the detection of these vehicle classes, and a better mechanism for precise velocity estimation, as well as vehicle detection under different environmental conditions, should be further investigated.
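To illustrate the velocity-estimation step described above, the following is a minimal sketch, not the paper's exact implementation: it assumes a PD calibration obtained from a road marking of known real-world length inside the ROI, and hypothetical helper names (calibrate_meters_per_pixel, estimate_velocity_kmh) applied to per-frame vehicle centroids produced by a tracker such as Sort or Deepsort.

```python
# Hedged sketch: converting a tracked vehicle's pixel displacement inside an ROI
# into a velocity estimate, given a pixel-distance (PD) calibration derived from
# a road marking of known length. Names and values are illustrative assumptions.

def calibrate_meters_per_pixel(marking_length_m: float, marking_length_px: float) -> float:
    """Meters represented by one pixel, from a road marking of known length."""
    return marking_length_m / marking_length_px

def estimate_velocity_kmh(track_px: list[tuple[float, float]],
                          meters_per_pixel: float,
                          fps: float) -> float:
    """Velocity of one tracked vehicle from its per-frame centroids inside the ROI."""
    if len(track_px) < 2:
        return 0.0
    (x0, y0), (x1, y1) = track_px[0], track_px[-1]
    pixel_dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    meters = pixel_dist * meters_per_pixel
    seconds = (len(track_px) - 1) / fps
    return (meters / seconds) * 3.6  # m/s -> km/h

# Example (assumed numbers): a 10 m marking spanning 125 px, and a vehicle
# tracked over 30 frames of 30-fps video moving 12 px per frame.
mpp = calibrate_meters_per_pixel(10.0, 125.0)
track = [(100 + 12 * i, 300) for i in range(30)]
print(round(estimate_velocity_kmh(track, mpp, fps=30.0), 1), "km/h")
```

In this sketch the ROI matters only through the calibration: a longer ROI (e.g., spanning more road markings, as suggested by ROI-14 versus ROI-10) averages the pixel displacement over a greater distance, which can smooth per-frame localization noise in the velocity estimate.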