Abstract

Accurate recognition of traffic lights on public roads is a critical step toward deploying automated driving systems. Camera sensors are widely used for object detection, so it might seem natural to apply them directly to traffic signal detection. However, images captured by cameras contain a large number of unrelated objects, which significantly reduces detection accuracy. This paper presents a novel yet reliable method for recognizing the state of traffic lights in images. Using accurate 3D maps and a technique for self-localization within them, both of which are already employed in autonomous driving systems, we improve traffic light detection accuracy. Given the current vehicle location, we look up the traffic signals along the road in the map and extract the region of interest (ROI) containing only the traffic light from images captured by a vehicle-mounted camera; the ROIs are then fed to custom classifiers that recognize the signal state. We evaluated our method on two datasets recorded during our urban public-road driving experiments, one captured in daylight and the other at sunset. The quantitative evaluations indicate that our method achieves over 97% average precision for each state and approximately 90% recall at distances of up to 90 meters under favorable conditions.
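
The abstract does not give implementation details, but the core map-based ROI extraction step can be illustrated with a rough Python sketch: a traffic light's 3D position from the map is projected into the camera image using the estimated vehicle pose and the camera intrinsics, and a crop is taken around the projected point. The function names, the assumed light housing size, and the margin factor below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def project_to_image(p_world, T_world_to_cam, K):
    """Project a 3D map point into pixel coordinates.

    p_world        -- (3,) traffic light position from the 3D map
    T_world_to_cam -- (4, 4) rigid transform from world to camera frame,
                      derived from the self-localization pose and the
                      camera mounting extrinsics (assumed available)
    K              -- (3, 3) camera intrinsic matrix
    Returns (u, v, depth) or None if the point is behind the camera.
    """
    p_h = np.append(p_world, 1.0)      # homogeneous coordinates
    p_cam = T_world_to_cam @ p_h       # point in the camera frame
    if p_cam[2] <= 0:                  # behind the camera: not visible
        return None
    uv = K @ p_cam[:3]
    return uv[0] / uv[2], uv[1] / uv[2], p_cam[2]

def extract_roi(image, p_world, T_world_to_cam, K,
                light_size_m=0.9, margin=1.5):
    """Crop a square ROI around the projected traffic light.

    The ROI side scales with the expected physical size of the light
    housing (light_size_m, an assumed value) divided by depth, and is
    enlarged by a margin to absorb localization and map errors.
    """
    proj = project_to_image(p_world, T_world_to_cam, K)
    if proj is None:
        return None
    u, v, depth = proj
    f = K[0, 0]                        # focal length in pixels
    half = int(margin * f * light_size_m / depth / 2)
    h, w = image.shape[:2]
    x0, x1 = max(0, int(u) - half), min(w, int(u) + half)
    y0, y1 = max(0, int(v) - half), min(h, int(v) + half)
    if x0 >= x1 or y0 >= y1:
        return None                    # ROI fell outside the image
    return image[y0:y1, x0:x1]
```

The margin around the projected light is one simple way to tolerate small pose and map errors; the cropped ROI would then be passed to the state classifiers described in the paper.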
