Abstract

Spatiotemporal information about the vehicles on a bridge is important evidence of the bridge's stress state and traffic density. A methodology for obtaining this information is proposed based on computer vision, combining detection by a Faster region-based convolutional neural network (Faster R-CNN), multiple object tracking, and image calibration. To minimize detection time, the ZF (Zeiler & Fergus) model with five convolutional layers is selected as the part shared between the Region Proposal Network and Fast R-CNN within Faster R-CNN. An image data set of 1,694 images covering eight vehicle types is established for training Faster R-CNN. Combined with the detection results for each video frame, methods of multiple object tracking and image calibration are developed to acquire the vehicle parameters, including length, number of axles, speed, and the lane the vehicle occupies. Tracking is based mainly on the distances between vehicle bounding boxes within a virtual detection region. Image calibration relies on moving standard vehicles of known length, which serve as 3D templates for computing the vehicle parameters. Once the vehicle parameters are acquired, the spatiotemporal information of the vehicles can be obtained. The proposed system runs at a frame rate of 16 fps and requires only two cameras as input devices. The system is successfully applied on a double-tower cable-stayed bridge; the identification accuracies for vehicle type and number of axles are about 90% and 73% in the virtual detection region, and the speed errors of most vehicles are less than 6%.
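The tracking and calibration steps described above can be illustrated with a minimal sketch. This is not the authors' implementation: the greedy nearest-centroid matcher, the distance threshold `max_dist`, and the helper names (`match_tracks`, `estimate_speed`) are all assumptions made here for illustration. It assumes tracking associates detections across consecutive frames by the distance between bounding-box centers, and that a standard vehicle of known length provides the pixel-to-meter scale used for speed estimation.

```python
import math

def centroid(box):
    # box = (x1, y1, x2, y2) in pixel coordinates
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def match_tracks(prev_boxes, curr_boxes, max_dist=50.0):
    """Greedily match each previous-frame box to the nearest
    current-frame box whose centroid lies within max_dist pixels.
    Returns {prev_index: curr_index}."""
    matches, used = {}, set()
    for i, pb in enumerate(prev_boxes):
        best_j, best_d = None, max_dist
        for j, cb in enumerate(curr_boxes):
            if j in used:
                continue
            d = math.dist(centroid(pb), centroid(cb))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches

def estimate_speed(pixel_disp, scale_m_per_px, frame_dt):
    """Convert per-frame pixel displacement to km/h, given a
    scale calibrated from a standard vehicle of known length
    (scale = known_length_m / length_in_pixels)."""
    return pixel_disp * scale_m_per_px / frame_dt * 3.6
```

For example, a calibration vehicle 12 m long spanning 240 px gives a scale of 0.05 m/px; a tracked box moving 8 px between frames at 16 fps then corresponds to roughly 23 km/h.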
