Research on the Accuracy of Automatic Vision Algorithms for Classifying Traffic Lights
This article presents research concerning the recognition of road traffic lights. Initially, vision algorithms were analysed regarding their suitability for implementation in vehicle control systems dedicated to individuals with specific communication needs. The paper presents the results of experimental studies on a vision system for recognising traffic lights, conducted using convolutional neural networks (CNNs). For the experiment, a custom database of traffic light images was prepared. This database was utilised to train a selected Xception CNN model and for processing by a classic algorithm based on colour analysis in the HSV colour space. The obtained classification accuracy results, reaching 98.75%, could serve as a 'green light' for implementing the developed technology to assist driving. The research findings may also find application in driver assistance systems, with particular attention given to the mobility of people with specific needs, such as those with visual impairments.
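The classic HSV colour-analysis baseline mentioned above can be illustrated with a minimal sketch. The hue ranges and the saturation/value cut-off below are illustrative assumptions, not the values used in the study:

```python
import colorsys

def classify_light_rgb(r, g, b):
    """Classify a lamp colour from a dominant RGB pixel of a light ROI.

    Hue ranges and the saturation/value cut-off are assumed values for
    illustration; a real system would tune them on labelled data.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.3 or v < 0.3:   # too dull to be a lit lamp
        return "unknown"
    deg = h * 360.0          # hue in degrees
    if deg < 20 or deg > 340:
        return "red"
    if 40 <= deg <= 70:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return "unknown"
```

A CNN such as the Xception model mentioned above replaces this hand-written rule set with learned features, which is what pushes accuracy toward the reported 98.75%.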
- Book Chapter
2
- 10.1007/978-3-030-31760-7_6
- Oct 24, 2019
The use of autonomous vehicles aims to eventually reduce the number of motor vehicle fatalities caused by humans. Deep learning plays an important role in making this possible because it can leverage the huge amount of training data that comes from autonomous car sensors. Automatic recognition of traffic lights and vehicle signals is a perception module critical to autonomous vehicles, because a deadly car accident could happen if a vehicle fails to follow traffic lights or vehicle signals. A practical Traffic Light Recognition (TLR) or Vehicle Signal Recognition (VSR) system faces several challenges, including varying illumination conditions, false positives and long computation times. In this chapter, we propose a novel approach to recognize Traffic Lights (TL) and Vehicle Signals (VS) with high dynamic range imaging and deep learning in real time. Different from existing approaches, which use only bright images, we use both high-exposure/bright and low-exposure/dark images provided by a high dynamic range camera. TL candidates can be detected robustly in low-exposure/dark frames because they stand out against a clean dark background. The TL candidates on the consecutive high-exposure/bright frames are then classified accurately using a convolutional neural network. The dual-channel mechanism achieves promising results because it uses the undistorted color and shape information of low-exposure/dark frames as well as the rich texture of high-exposure/bright frames. Furthermore, the TLR performance is boosted by incorporating a temporal trajectory tracking method. To speed up the process, a region of interest is generated to reduce the search regions for the TL candidates. The experimental results on a large dual-channel database show that our dual-channel approach outperforms the state of the art, which uses only bright images. Encouraged by the promising performance of the TLR, we extend the dual-channel approach to vehicle signal recognition.
The algorithm reported in this chapter has been integrated into our autonomous vehicle via Data Distribution Service (DDS) and works robustly on real roads.
- Conference Article
1
- 10.1109/iicspi48186.2019.9096010
- Nov 1, 2019
Traffic light detection and recognition play an important role in Advanced Driver Assistance Systems and driverless cars. This paper proposes a new method based on a spectral residual model and multi-feature fusion to solve the problem of traffic light recognition. First, the image acquired by the camera is converted to the LAB and HSV color spaces, and the A-channel and S-channel are used to obtain a saliency map through the spectral residual model. Secondly, prior information about traffic lights is used to establish a model for determining candidate areas. Then the HOG features, LBP features, and RGB color features of the candidate areas are extracted, and after multi-feature fusion the traffic light status is recognized by an SVM (Support Vector Machine) classifier. The experimental results show that the recognition rate of the algorithm reaches 96%, which can provide stable and accurate traffic light status information for driving vehicles.
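The spectral residual saliency step described above can be sketched in a few lines of NumPy. This follows the general FFT-based spectral residual formulation; the 3x3 box filter over the log spectrum and the use of a single input channel are assumed choices, not necessarily the paper's:

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for one image channel.

    img: 2-D float array (e.g. the LAB A-channel or HSV S-channel).
    The 3x3 box filter and log1p smoothing are illustrative choices.
    """
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))          # log amplitude spectrum
    phase = np.angle(f)
    # local average of the log spectrum via a 3x3 box filter (wrap-around)
    avg = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - avg               # the "spectral residual"
    # reconstruct with original phase; squared magnitude gives saliency
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)       # normalise to [0, 1]
```

Candidate areas would then be thresholded from this map before feature extraction and SVM classification.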
- Research Article
- 10.52783/anvi.v28.3656
- Feb 3, 2025
- Advances in Nonlinear Variational Inequalities
Advanced Driver Assistance Systems (ADAS) have become integral to enhancing road safety and driver convenience in autonomous and semi-autonomous vehicles. One key component of ADAS is the accurate recognition of traffic signs and traffic lights to assist drivers in following road regulations and improving decision-making. This paper proposes a vehicular control system based on Computer Vision and Deep Learning techniques, designed to recognize traffic signs and lights and provide real-time driving assistance with distance estimation. The system employs Convolutional Neural Networks (CNNs) for traffic sign recognition and a hybrid detection model for traffic light detection, ensuring accuracy in dynamic environments. The proposed system was tested on publicly available datasets and demonstrated significant improvements in detection accuracy and response time, contributing to the development of safer and more efficient autonomous vehicles.
- Research Article
16
- 10.1007/s12652-021-02900-y
- Jan 30, 2021
- Journal of Ambient Intelligence and Humanized Computing
Perceiving information about ambient traffic lights is an essential task for autonomous vehicles. To deal with this issue, this work develops an accurate and fast traffic light recognition strategy for autonomous vehicles using an onboard camera. In this paper, deep learning based detection and object tracking are synthesized to determine the position and color of traffic lights. First, a mechanism for simultaneous detection and tracking is established, wherein the video reading module, the convolutional neural network (CNN) module, and the integrated channel feature tracking (ICFT) module run simultaneously. Then, the respective modules for detection and tracking are introduced. A CNN model is designed and trained to obtain the position of traffic lights, which serves as initial information for tracking. ICFT is applied to continually track the traffic light targets and determine the light color. Finally, the effectiveness of the presented method is validated via comparison with the state of the art. Experimental results indicate that the proposed technique can improve the accuracy and speed of recognition. Our contributions are: (1) establishing a mechanism for simultaneous detection and tracking of traffic lights; (2) carefully designing the CNN architecture and ICFT features; (3) precision and recall rates on traffic light recognition of 0.962 and 0.909, respectively, with a recognition speed of 21.4 FPS (GPU: Nvidia Titan Xp).
- Book Chapter
4
- 10.1007/978-3-030-34113-8_50
- Jan 1, 2019
Traffic light recognition is crucial for intelligent driving systems. In application scenarios, the environment of traffic lights is very complicated, due to differing weather, distance and distortion conditions. In this paper, we propose a deep-learning based traffic light recognition method, named DeTLR, which achieves reliable recognition precision and real-time running speed. Our DeTLR system consists of four parts: a skip sampling system, a traffic light detector (TLD), preprocessing, and a traffic light classifier (TLC). Our TLD combines MobileNetV2 and the Single Stage Detector (SSD) framework, and we design a small convolutional neural network for the TLC. To run our system in real time, we develop a skip-frames technique and make up for the time delay in the final response system. Our method runs safely in complex natural situations, benefiting from both the algorithm and the diversity of the training dataset. Our model reaches a precision of 96.7% on green lights and 94.6% on red lights. The comparison to the one-step method indicates that our two-step method is better in both recall and precision, while the difference in running time is only about 0.7 ms. Furthermore, experiments on other datasets (LISA, LaRA and WPI) show the good generalization ability of our model.
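The skip-frames idea, running the expensive detector only on a subsample of frames and reusing the last result in between, can be sketched as follows; the stride value is an assumed parameter, not the chapter's setting:

```python
def skip_frame_stream(frames, detector, stride=3):
    """Run `detector` on every `stride`-th frame and carry the last
    result forward for the skipped frames.

    A sketch of the skip-sampling technique; `stride=3` is illustrative.
    """
    last = None
    results = []
    for i, frame in enumerate(frames):
        if i % stride == 0:          # only these frames pay detection cost
            last = detector(frame)
        results.append(last)         # skipped frames reuse the last output
    return results
```

The per-frame latency saved this way is what the final response system then compensates for, as the abstract notes.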
- Research Article
33
- 10.1007/s10111-015-0339-x
- May 1, 2015
- Cognition, Technology & Work
To develop a driver assistance system with the goal to increase driving efficiency, we aimed at understanding unassisted driving behaviour. With this knowledge, we will then be able to estimate the potential of the assistance system to support drivers in avoiding unnecessary deceleration and acceleration when approaching traffic lights and to estimate the amount of influence the driver assistance system could have on normal driving. Efficient driving was defined as driving behaviour that leads to reduced fuel consumption and emissions. In a driving simulator experiment with twelve participants and a within-subjects design, drivers approached intersections while the traffic light was either solid green or solid red, or changed from red to green or from green to red during the approach. In addition, we varied whether there was a lead vehicle present and manipulated visibility through the presence or absence of fog. Driving speed, acceleration and pedal usage were analysed and interpreted in light of their relation to fuel consumption and emissions, which is well known from the literature. Participants avoided strong accelerations and decelerations when approaching a solid green traffic light compared to a changing red to green traffic light. Speed was reduced earlier when the traffic light was solid red compared to when the traffic light changed from green to red. Higher visibility in the non-fog conditions compared to the fog condition was only an advantage in terms of more efficient driving behaviour when the traffic light phase did not change during the approach. The potential for improvements in driving efficiency was higher when drivers were in free driving compared to when following a lead vehicle. We propose that approaching traffic light intersections takes place in three phases: an orientation, a preparation and a realisation phase.
A driver assistance system is expected to improve drivers' anticipation of the driving scene and could recommend efficient driving behaviour in all three phases.
- Conference Article
12
- 10.1109/icce-tw.2014.6904063
- May 1, 2014
Given the rapid expansion of car ownership worldwide, vehicle safety is an increasingly critical issue in the automobile industry. The reduced cost of intelligent mobile phones has made it economically feasible to develop intelligent systems for visual-based event detection for forward collision avoidance and mitigation. In this work, a real-time traffic red light recognition method is proposed for mobile platforms. The proposed method consists of real-time traffic light localization via image down-sampling, circular region detection and further traffic light recognition. The Hough Transform is modified to quickly localize the traffic light candidates. Finally, a strong classifier built from multiple weak features is employed for further verification. In the experiments, the detection rate reaches above 70%. This shows that our proposed traffic light recognition can be applied in real-world environments.
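A strong classifier built from multiple weak features typically takes a weighted vote over the weak outputs, in the AdaBoost style. The sketch below assumes each weak feature already yields a score in [0, 1]; the weights and decision threshold are illustrative, not the paper's:

```python
def strong_classifier(weak_scores, weights, threshold=0.5):
    """Weighted vote over weak-feature scores (boosting-style sketch).

    weak_scores: per-feature scores in [0, 1] for one candidate region.
    weights:     assumed per-feature weights (e.g. learned by boosting).
    Returns True if the normalised weighted sum clears the threshold.
    """
    total = sum(w * s for w, s in zip(weights, weak_scores))
    return total / sum(weights) >= threshold
```

In the pipeline above, each Hough-localized circular candidate would be scored this way to reject false positives.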
- Conference Article
3
- 10.4271/2018-01-1620
- Aug 7, 2018
Traffic light detection is of great significance for unmanned vehicles and driver assistance systems, and many detection algorithms have been proposed in recent years. However, traffic light detection still cannot achieve desirable results under complicated illumination, bad weather conditions and complex road environments. Besides, it is difficult for embedded devices to detect multi-scale traffic lights simultaneously, especially tiny ones. To solve these problems, this paper presents a robust vision-based method to detect traffic lights. The method contains two main stages: the region proposal stage and the traffic light recognition stage. In the region proposal stage, we utilize lane detection to remove part of the background from the original image. Then, we apply adaptive Canny edge detection to highlight region proposals in the Cr color channel, where red or green color proposals can be separated easily. Finally, the enlarged traffic light RoI (Region of Interest) is extracted for classification. In the traffic light recognition stage, a tiny but effective convolutional neural network (CNN), named TLRNet, classifies each traffic light RoI into its own class. In fact, deep learning (DL) struggles to detect small objects in many fields, so we use the region proposal stage to obtain RoIs and CNN classification to achieve a good result. We validate our method both on the Laboratory for Intelligent and Safe Automobiles (LISA) Traffic Lights Dataset and on video sequences captured from Beijing's streets. The experimental results prove that the proposed method achieves good results for multi-scale traffic lights on the TX1 embedded platform, and reaches real-time performance at 28 fps.
- Conference Article
7
- 10.1145/3303714.3303726
- Dec 26, 2018
Traffic light recognition plays an important role in the field of intelligent vehicles for safe driving. Driving with intelligent vehicles has been demonstrated as a trend for the coming years. However, a number of difficulties currently exist in traffic light recognition, such as the appearance of traffic lights, illumination, and bad weather. After outlining the potential challenges in traffic light recognition, this paper introduces a real-time traffic light status recognition method based on the combination of YOLOv3 and a lightweight Convolutional Neural Network (CNN). YOLOv3 performs traffic light ROI detection, while the lightweight CNN is responsible for classifying the traffic light status. Two alternative methods are compared with this paper's method. We present an extensive evaluation on the BDDV dataset. Experimental results show that our method reaches both high accuracy (98%) and low time consumption.
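The two-stage detect-then-classify structure described above can be expressed as a small pipeline. Here `detect_rois` and `classify_state` are hypothetical stand-ins for the paper's YOLOv3 detector and lightweight CNN, accepted as plain callables:

```python
def recognize(frame, detect_rois, classify_state):
    """Two-stage traffic light recognition sketch.

    detect_rois(frame)         -> list of ROI boxes (stand-in for YOLOv3)
    classify_state(frame, roi) -> status label (stand-in for the light CNN)
    Returns (roi, status) pairs for every detected light.
    """
    return [(roi, classify_state(frame, roi)) for roi in detect_rois(frame)]
```

Keeping the two stages behind simple callables is also what makes it easy to swap either model, as the paper's comparison with two alternative methods suggests.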
- Research Article
9
- 10.1038/s41598-023-31107-8
- Mar 10, 2023
- Scientific Reports
Car congestion is a pressing issue for everyone on the planet. It can be caused by accidents, traffic lights, rapid accelerations, decelerations, and hesitation of drivers, as well as by small, low-capacity roads without bridges. Increasing road width and constructing roundabouts and bridges are solutions to car congestion, but the cost is significant. TLR (traffic light recognition) reduces accidents and traffic congestion caused by traffic lights (TLs). Image processing with a convolutional neural network (CNN) lacks the ability to deal with harsh weather. Semi-automatic annotation for traffic light detection employs a global navigation satellite system, raising the cost of automobiles; its data was not collected in harsh conditions, and tracking was not supported. Integrated channel feature tracking (ICFT) combines detection and tracking, but it does not support sharing information with neighbors. This study used vehicular ad-hoc networks (VANETs) for VANET traffic light recognition (VTLR). Information exchange as well as monitoring of the TL status, the time remaining before a change, and recommended speeds are supported. Based on testing, VTLR performs better than semi-automatic annotation, image processing with CNN, and ICFT in terms of delay, success ratio, and the number of detections per second.
- Conference Article
5
- 10.1109/icivc47709.2019.8980828
- Jul 1, 2019
Detection and recognition of traffic lights is important for intelligent assisted driving. Traditional color space based traffic light detection algorithms are easily affected by other objects (such as buildings or car taillights) in the surrounding environment, and their detection accuracy and real-time performance are not ideal. Generally, deep learning based methods have better real-time and accuracy performance for normal scenes with obvious traffic light targets. However, the detection rate and accuracy of these methods for small traffic light targets at night are still not satisfactory. To solve this problem, this paper proposes a novel traffic light detection and recognition algorithm based on multi-feature fusion, implemented in two steps (detection and recognition). In the first step, the SLIC (simple linear iterative clustering) super-pixel segmentation algorithm is used to reduce image data processing complexity and improve real-time performance. The mean-shift algorithm is used to cluster the HSV (Hue, Saturation, Value) color space components respectively, enhancing the target data and reducing interference from other targets. In the second step, the feature information extracted by a CNN (Convolutional Neural Network) and HOG (Histogram of Oriented Gradients) features are fused. An SVM (Support Vector Machine) classifier is trained on our own traffic light dataset. To verify the proposed algorithm, a number of experiments were carried out in real traffic scenes. Experimental results show that the algorithm has almost the same real-time performance as the YOLO_V3 neural network and better accuracy.
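Multi-feature fusion before an SVM is commonly plain concatenation of the per-descriptor vectors. The sketch below fuses a CNN descriptor with a HOG descriptor; the per-vector L2 normalisation is an assumed choice, not necessarily this paper's:

```python
import numpy as np

def fuse_features(cnn_feat, hog_feat):
    """Concatenate L2-normalised CNN and HOG descriptors into one
    vector for an SVM. Normalising each descriptor separately (an
    assumption here) keeps either feature from dominating by scale."""
    def l2(v):
        v = np.asarray(v, dtype=float)
        return v / (np.linalg.norm(v) + 1e-12)
    return np.concatenate([l2(cnn_feat), l2(hog_feat)])
```

The fused vector is then what the SVM classifier is trained on.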
- Conference Article
70
- 10.1109/ivs.2017.7995785
- Jun 1, 2017
Traffic Light Detection (TLD) and understanding of state semantics at intersections play a pivotal role in driver assistance systems and, by extension, autonomous vehicles. Despite several reliable traffic light state detection approaches in the literature, traffic light state recognition remains an open problem due to outdoor perception challenges, which include occlusions, illumination and scale variations. This paper presents a vision-based traffic light structure detection and convolutional neural network (CNN) based state recognition method, which is robust under different illumination and weather conditions. In the first step, traffic light candidate regions are generated by performing HSV based color segmentation and are then filtered using shape and area analysis. Further, in order to incorporate the structural information of traffic lights in diverse background scenarios, the Maximally Stable Extremal Region (MSER) approach is employed, which helps to localize the correct traffic light structure in the image. To further validate the traffic light candidate regions, Histogram of Oriented Gradients (HOG) features are extracted for each region and traffic light structures are validated using a Support Vector Machine (SVM). The state of the traffic lights is then recognized using a CNN. To evaluate the performance of the proposed method, we present several results under a variety of lighting conditions in a real-world environment. Experimental results show that the proposed method outperforms other conventional vision based methods under varying light and weather conditions.
- Research Article
- 10.1142/s1016237207000380
- Jan 1, 2007
- Biomedical Engineering: Applications, Basis and Communications
PEDESTRIAN TRAFFIC LIGHT RECOGNITION FOR THE VISUALLY IMPAIRED. Shun-Hsien Tsao, Yu-Luen Chen, Jhe-Jyu Luh, Jin-Shin Lai, Te-Son Kuo and Han-Shuan Wu. Biomedical Engineering: Applications, Basis and Communications, Vol. 19, No. 05, pp. 289-294 (2007).
In this research, we employ computer vision technology to recognize traffic lights. We manipulate the color information of the traffic light in the HSI color space. Then, by adding motor tracking technology, we keep the traffic light, bounded by a region of interest (ROI), in the center area of the monitor. Meanwhile, we assign a different audio frequency (pitch) to the red and green lights to inform people of the light's state. We developed the algorithm on a PC with MATLAB and then ported the whole system to the TI TMS320 DM642 EVM DSP platform. The result is that, in non-complex environments, the system can distinguish the red light from the green light and can output an audio signal through a speaker. Keywords: electronic sensory systems for the visually impaired; computer vision; DSP; portable; traffic light.
- Research Article
7
- 10.1177/03611981211016467
- Jul 12, 2021
- Transportation Research Record: Journal of the Transportation Research Board
Traffic light recognition is an important task for automatic driving support systems. Conventional traffic light recognition techniques fall into model-based methods, which frequently suffer from environmental changes such as sunlight, and machine-learning-based methods, which have difficulty detecting distant and occluded traffic lights because they fail to represent features efficiently. In this work, we propose a method for recognizing distant traffic lights that utilizes semantic segmentation to extract traffic light regions from images and a convolutional neural network (CNN) to classify the state of the extracted traffic lights. Since semantic segmentation classifies objects pixel by pixel in consideration of the surrounding information, it can successfully detect distant and occluded traffic lights. Experimental results show that the proposed semantic segmentation improves detection accuracy for distant traffic lights, confirming an accuracy improvement of 12.8% over detection by object detection. In addition, our CNN-based classifier identified the traffic light status more than 30% more accurately than color thresholding classification.
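Turning a per-pixel segmentation into traffic light regions for the downstream classifier amounts to grouping connected "traffic light" pixels and taking their bounding boxes, as in this stdlib-only sketch (4-connectivity is an assumed choice):

```python
from collections import deque

def light_regions(mask):
    """Bounding boxes (x0, y0, x1, y1) of connected foreground pixels
    in a binary segmentation mask (list of lists of 0/1).

    BFS flood fill with 4-connectivity; a sketch of converting per-pixel
    labels into detection regions, not the paper's implementation.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q = deque([(y, x)])
                seen[y][x] = True
                y0 = y1 = y
                x0 = x1 = x
                while q:
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes
```

Each resulting box would then be cropped and passed to the state classifier.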
- Research Article
2
- 10.12783/dtcse/cmee2016/5386
- Jan 25, 2017
- DEStech Transactions on Computer Science and Engineering
A traffic signal light recognition system is an essential part of Advanced Driver Assistance Systems (ADAS). Methods for traffic light recognition based on a single feature and fixed threshold filtering are usually ineffective against complex backgrounds and in variable lighting environments. To solve this problem, an approach based on a combination of the color and shape features of traffic lights is proposed, and machine learning is used for the recognition of traffic lights. On the basis of the characteristic parameters extracted from the candidate region, an SVM classifier is constructed to classify the traffic signal lights. Experimental results show that this method can achieve accurate location and recognition of traffic lights in complex scenes.