Abstract

Accurate detection, classification, and tracking of vehicles are essential for intelligent transport systems (ITS) and road maintenance. In recent years, deep learning (DL)-based approaches have become the method of choice for real-time vehicle classification from surveillance cameras, but their practical deployment is hampered by adverse lighting conditions and camera positioning. In this research, we develop a DL-based method for near real-time counting, classification, and tracking of multiple vehicles on individual road lanes. First, we train a network of the You Only Look Once (YOLO) family on a custom dataset that we have curated; it contains nearly 30,000 training samples covering seven vehicle classes, more than the existing benchmark datasets. Second, we fine-tune the trained model on a smaller dataset collected from the surveillance cameras used during deployment. Third, we connect the trained model to a tracking algorithm that we have developed to produce a per-lane report including vehicle speed and mobility. We test the robustness of the system on different vehicle faces and under adverse lighting conditions. The overall accuracy (OA) of classification ranges from 91% to 99% across the four vehicle faces (back, front, driver side, and passenger side). Similarly, in an experiment on adverse lighting, OAs of 93.7% and 99.6% are observed under noisy and clear lighting conditions, respectively. These results can assist road maintenance through spatial information management and sensing for intelligent transport planning.
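
To make the three-step pipeline described above concrete (YOLO detection, fine-tuning on camera-specific data, and a tracker that yields per-lane counts and speeds), the sketch below strings a YOLO detector together with a naive nearest-centroid tracker. It is a minimal illustration only, not the authors' implementation: the Ultralytics YOLO package, the `vehicles_finetuned.pt` weights file, the seven class names, the lane boundaries, the pixel-to-metre calibration, and the video path are all assumptions introduced for demonstration.

```python
# Minimal sketch (not the paper's implementation): per-lane vehicle counting and
# coarse speed estimation from a fine-tuned YOLO detector plus a simple
# nearest-centroid tracker. All names and constants below are illustrative.
from collections import defaultdict
import math

import cv2
from ultralytics import YOLO  # assumes the Ultralytics YOLO package

VEHICLE_CLASSES = ["car", "bus", "truck", "van",
                   "motorcycle", "pickup", "trailer"]   # assumed 7 classes
LANE_BOUNDS = [(0, 400), (400, 800), (800, 1200)]        # assumed lane x-ranges (px)
METRES_PER_PIXEL = 0.05                                  # assumed calibration factor

model = YOLO("vehicles_finetuned.pt")  # hypothetical fine-tuned weights

tracks = {}                    # track_id -> last (cx, cy, frame_idx); no pruning in this sketch
next_id = 0
lane_counts = defaultdict(int)

def lane_of(cx):
    """Return the index of the lane whose x-range contains the centroid, if any."""
    for i, (lo, hi) in enumerate(LANE_BOUNDS):
        if lo <= cx < hi:
            return i
    return None

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for box in model(frame, verbose=False)[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        cls_name = VEHICLE_CLASSES[int(box.cls[0])]
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0

        # Nearest-centroid association: a crude stand-in for the paper's tracker.
        best_id, best_d = None, 80.0  # assumed max association distance (px)
        for tid, (px, py, pf) in tracks.items():
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_id, best_d = tid, d

        if best_id is None:
            # New track: count it once, in the lane where it first appears.
            best_id = next_id
            next_id += 1
            lane = lane_of(cx)
            if lane is not None:
                lane_counts[lane] += 1
        else:
            # Existing track: estimate speed from centroid displacement per frame.
            px, py, pf = tracks[best_id]
            dt = (frame_idx - pf) / fps
            if dt > 0:
                speed_kmh = math.hypot(cx - px, cy - py) * METRES_PER_PIXEL / dt * 3.6
                print(f"{cls_name} (track {best_id}): ~{speed_kmh:.1f} km/h "
                      f"in lane {lane_of(cx)}")

        tracks[best_id] = (cx, cy, frame_idx)
    frame_idx += 1

print("per-lane counts:", dict(lane_counts))
```

In practice the nearest-centroid association and the fixed pixel-to-metre scale would be replaced by the paper's own tracking algorithm and a proper camera calibration; the sketch only shows where detection, lane assignment, counting, and speed estimation fit together.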
