Abstract

Vehicle taillight detection is essential for analyzing and predicting driver intention in collision avoidance systems. In this article, we propose an end-to-end framework that locates rear brake and turn signals from a video stream in real time. The system adopts the fast YOLOv3-tiny as the backbone model, and three improvements are made to increase detection accuracy on taillight semantics: an additional output layer for multi-scale detection, a spatial pyramid pooling (SPP) module for richer deep features, and focal loss to alleviate class imbalance and improve hard-sample classification. Experimental results demonstrate that the integration of multi-scale features, together with hard-example mining, contributes greatly to turn-signal detection. The detection accuracy increases significantly, by 7.36%, 32.04%, and 21.65% (absolute gain) for brake, left-turn, and right-turn signals, respectively. In addition, we construct a taillight detection dataset in which brake and turn signals are annotated with bounding boxes, which may help nourish the development of this field.
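To make the SPP improvement concrete, the following is a minimal PyTorch sketch of a YOLO-style SPP block; the kernel sizes (5, 9, 13) are the common YOLOv3-SPP configuration and are an assumption here, not values confirmed by the abstract:

import torch
import torch.nn as nn

class SPP(nn.Module):
    # YOLO-style spatial pyramid pooling: parallel max-pools with
    # different kernel sizes (stride 1, padded to preserve spatial
    # size), concatenated with the input to mix receptive fields.
    # Kernel sizes (5, 9, 13) are an assumed, commonly used setting.
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # Output has (1 + len(kernel_sizes)) * C channels.
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

Likewise, the focal loss mentioned above is presumably the standard formulation of Lin et al. (2017). A minimal NumPy sketch follows, using the commonly cited defaults alpha = 0.25 and gamma = 2, which are assumptions rather than the paper's reported hyperparameters:

import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    # Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    # The (1 - p_t)**gamma factor shrinks toward 0 for well-classified
    # (easy) examples, so the gradient concentrates on hard ones.
    p = np.clip(p, eps, 1.0 - eps)            # numerical stability
    p_t = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

With gamma = 0 this reduces to weighted cross-entropy; increasing gamma down-weights easy examples, which is how the loss addresses the class imbalance between abundant "off" taillight states and rarer turn-signal activations.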
