Abstract

The exponential increase in the use of technology in road management systems has made real-time visual information available at thousands of locations on road networks. A preliminary step in preventing or detecting accidents is identifying the vehicles on the road. The application of convolutional neural networks to object detection has significantly improved this field, surpassing classical computer vision techniques. However, the available pre-trained models still suffer from low detection rates, especially for small objects. Their main drawback is that they require manual labeling of the vehicles appearing in the images from each IP camera on the road network in order to retrain the model, a task that is not feasible when thousands of cameras are distributed across the extensive road network of a nation or state. We propose a new automatic procedure for detecting small-scale objects in traffic sequences. In the first stage, vehicle patterns are generated automatically from a set of frames through an offline process, using super-resolution techniques and pre-trained object detection networks. The object detection model is then retrained with the previously obtained data, adapting it to the analyzed scene. Finally, online and in real time, the retrained model is applied to the rest of the traffic sequence or to the video stream generated by the camera. This framework has been successfully tested on the NGSIM and GRAM datasets.
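
As a rough illustration of the first (offline) stage only, the sketch below auto-labels vehicles in a single frame using an off-the-shelf torchvision Faster R-CNN as the pre-trained detector. The bicubic upscale, filename, scale factor and confidence threshold are illustrative placeholders standing in for the super-resolution techniques described above; this is a minimal sketch under those assumptions, not the authors' implementation.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms import functional as F
    from PIL import Image

    # Stage 1 (offline): generate pseudo-labels for one frame of the analyzed scene.
    # A plain bicubic upscale stands in here for the super-resolution step.
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()

    SCALE = 2                                                  # illustrative upscale factor
    frame = Image.open("frame_0001.jpg").convert("RGB")        # illustrative filename
    upscaled = frame.resize((frame.width * SCALE, frame.height * SCALE), Image.BICUBIC)

    with torch.no_grad():
        pred = detector([F.to_tensor(upscaled)])[0]

    # Keep confident detections of vehicle classes
    # (COCO ids: 3 car, 4 motorcycle, 6 bus, 8 truck).
    vehicle_ids = torch.tensor([3, 4, 6, 8])
    keep = (pred["scores"] > 0.5) & torch.isin(pred["labels"], vehicle_ids)
    boxes = pred["boxes"][keep] / SCALE    # map boxes back to the original resolution
    labels = pred["labels"][keep]

    # 'boxes' and 'labels' would serve as automatically generated training data for
    # the second stage, where the detector is fine-tuned to the scene before being
    # deployed online on the camera's video stream.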
