Abstract

In urban environments, visual information on and along roadways can distract drivers and cause them to miss traffic signs and other accident-prone (AP) features. To avoid accidents caused by missing these visual cues, this paper proposes a system that visually notifies drivers of AP-features based on real-time dashcam images. For this purpose, Google Street View images around accident hotspots (areas of dense accident occurrence), identified from a real-world accident dataset, are used to train a novel attention module that classifies a given urban scene as an accident hotspot or a non-hotspot (area of sparse accident occurrence). The proposed module leverages channel, point, and spatial-wise attention learning on top of different CNN backbones. Compared with the CNN backbones alone, this yields better classification results and more reliable identification of AP-features with richer contextual knowledge. The proposed module achieves up to 92% classification accuracy. The model's capability of detecting AP-features was analyzed through a comparative study of three class activation map (CAM) methods, which inspect the specific AP-features driving the classification decision. The outputs of the CAM methods were processed by an image processing pipeline to extract only the AP-features that are explainable to drivers, which are then presented through a visual notification system. A range of experiments was performed to demonstrate the efficacy of the system and the validity of the extracted AP-features. Ablating the AP-features, which occupy on average 9.61% of the total area of each image sample, increased the probability of a given area being classified as a non-hotspot by up to 21.8%.
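The abstract does not specify the internal architecture of the attention module, so the following PyTorch sketch is only a rough illustration of what "channel, point, and spatial-wise attention on top of a CNN backbone" could look like. It assumes CBAM-style channel and spatial gates plus a pointwise (1x1 convolution) gate; the class name, reduction ratio, and feature shapes are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Illustrative channel + point + spatial attention over a CNN feature map.

    This is a sketch under assumed design choices (CBAM-like gates), not the
    paper's actual module.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Point attention: a per-pixel, per-channel gate via 1x1 convolution.
        self.point_gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one H x W mask from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        x = x * self.point_gate(x)
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)

# Hypothetical usage: gate backbone features before a hotspot/non-hotspot head.
feats = torch.randn(2, 2048, 7, 7)      # e.g., ResNet-50 final feature map
gated = AttentionBlock(2048)(feats)     # same shape, attention-weighted
```

Because each gate preserves the feature-map shape, such a block can in principle be dropped between any CNN backbone and its classification head, which is consistent with the abstract's claim of evaluating the module over different backbones.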
