Abstract

Visual traffic surveillance using computer vision techniques can be noninvasive, automated, and cost-effective. Traffic surveillance systems that can detect, count, and classify vehicles can be employed to gather traffic statistics and achieve better traffic control in intelligent transportation systems. These systems work well in daylight, when road users are clearly visible to the camera, but they often struggle when the visibility of the scene is impaired by insufficient lighting or bad weather such as rain, snow, haze, and fog. In this paper, we therefore design a dual-input faster region-based convolutional neural network (RCNN) that makes full use of the complementary advantages of color and thermal images to detect traffic objects in bad weather. Unlike previous detectors, we use halfway fusion to combine color and thermal images for traffic object detection. In addition, we adopt pooling from multiple feature layers to accommodate the large size differences between traffic objects and thus accurately identify targets of different sizes. The experimental results show that the proposed method improves target recognition accuracy by 7.15% under normal weather conditions and 14.2% under bad weather conditions, which exhibits promising potential for real-world applications.
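
The following is a minimal sketch, not the authors' code, of the halfway-fusion idea described above: the color and thermal branches are kept separate through the early convolutional stages and merged at an intermediate feature map before the shared stages that would feed the region proposal and detection heads. The layer sizes, the fusion point, and the 1x1 fusion convolution are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code) of a
# dual-input backbone with "halfway fusion": the RGB and thermal branches are
# fused at an intermediate convolutional stage rather than at input or output.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class HalfwayFusionBackbone(nn.Module):
    """Separate early stages for RGB and thermal, shared stages after fusion."""

    def __init__(self):
        super().__init__()
        # Early, modality-specific stages (channel counts are assumptions).
        self.rgb_stem = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
        self.thermal_stem = nn.Sequential(conv_block(1, 64), conv_block(64, 128))
        # 1x1 convolution reduces the concatenated channels back to 128.
        self.fuse = nn.Conv2d(256, 128, kernel_size=1)
        # Later, shared stages operating on the fused feature map.
        self.shared = nn.Sequential(conv_block(128, 256), conv_block(256, 512))

    def forward(self, rgb, thermal):
        f_rgb = self.rgb_stem(rgb)           # (N, 128, H/4, W/4)
        f_ther = self.thermal_stem(thermal)  # (N, 128, H/4, W/4)
        fused = self.fuse(torch.cat([f_rgb, f_ther], dim=1))
        return self.shared(fused)            # feature map for the RPN / RoI head


# Example forward pass with a dummy 512x640 color/thermal pair.
backbone = HalfwayFusionBackbone()
features = backbone(torch.randn(1, 3, 512, 640), torch.randn(1, 1, 512, 640))
print(features.shape)  # torch.Size([1, 512, 32, 40])
```

In a full faster RCNN, the fused feature map would be shared by the region proposal network and the RoI head; only the backbone is sketched here.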

Highlights

  • A traffic surveillance camera system is an important part of an intelligent transportation system [1]; it monitors traffic conditions and pedestrians through cameras mounted above the roadway. Surveillance video contains a great deal of information [2], such as traffic flow, lane occupancy, and vehicle type, which can be further processed by a computer to obtain real-time traffic conditions and accurate prediction and discrimination, helping to alleviate traffic congestion, accidents, environmental pollution, and other issues. In recent years, with the development of computer vision, more and more algorithms have been applied to the field of traffic surveillance [3,4,5].

  • In order to evaluate the performance of our proposed multispectral recognition method, we trained three other object detectors, including two faster region-based convolutional neural network (RCNN) models trained on color or thermal images only, and a vanilla ConvNet.

  • We denote the faster RCNN trained on visible images as F-rgb and the faster RCNN trained on thermal images as F-ther.


Summary

Introduction

A traffic surveillance camera system is an important part of an intelligent transportation system [1]; it monitors traffic conditions and pedestrians through cameras mounted above the roadway. With the development of computer vision, more and more algorithms have been applied to the field of traffic surveillance [3,4,5]. These methods improve the efficiency of road monitoring and free people from boring and tedious work in front of the monitor [6]. However, when the visibility of the scene is impaired by insufficient lighting or bad weather, the efficiency and accuracy of identification are greatly reduced, which is unacceptable for traffic surveillance. To solve this problem, we propose a neural network that processes the color and thermal images collected by surveillance equipment and extracts information about the scene from them. A color image captures the visible light reflected or emitted by objects and the background; it has high spatial resolution and sharp texture details [8].
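
As noted in the abstract, the detector pools region-of-interest features from multiple backbone layers so that both small and large traffic objects are represented at a suitable resolution. The sketch below is a rough illustration of that general idea using torchvision's roi_pool; the choice of layers, strides, and channel counts is assumed for the example and is not taken from the paper.

```python
# Rough illustration (an assumption, not the paper's exact design): pool each
# RoI from several backbone layers and concatenate the results, so that small
# objects keep fine detail and large objects keep semantic context.
import torch
from torchvision.ops import roi_pool


def multi_layer_roi_features(feature_maps, strides, boxes, output_size=7):
    """Pool each RoI from every feature map and concatenate along channels.

    feature_maps: list of (N, C_i, H_i, W_i) tensors from different depths.
    strides: downsampling factor of each map relative to the input image.
    boxes: (K, 5) tensor of [batch_index, x1, y1, x2, y2] in image coordinates.
    """
    pooled = []
    for fmap, stride in zip(feature_maps, strides):
        # spatial_scale maps image coordinates onto this feature map.
        pooled.append(roi_pool(fmap, boxes, output_size, spatial_scale=1.0 / stride))
    return torch.cat(pooled, dim=1)  # (K, sum(C_i), output_size, output_size)


# Example: features from a shallow and a deep layer of the fused backbone.
shallow = torch.randn(1, 128, 128, 160)  # stride 4 relative to a 512x640 input
deep = torch.randn(1, 512, 32, 40)       # stride 16
rois = torch.tensor([[0, 50.0, 60.0, 180.0, 200.0]])  # one box in image coords
feats = multi_layer_roi_features([shallow, deep], [4, 16], rois)
print(feats.shape)  # torch.Size([1, 640, 7, 7])
```

Concatenating the pooled features gives the RoI classification and regression head access to both fine-grained detail from shallow layers and higher-level context from deep layers.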

Sections covered in the full text: Computer Vision for Traffic Surveillance Systems; Methods; Object Detector Model; Dataset and Data; Implementation Details; Experimental Results; Comparison; Discussion
