Abstract

Traffic accident anticipation is essential for successful autonomous and assistive driving systems. Existing accident anticipation algorithms, which mostly rely on the visual features of accident-related objects, achieve both high AP (Average Precision) and TTA (Time to Accident). Although these methods model the spatiotemporal relationships among the visual features of accident-related objects, they are often biased and therefore do not generalize well. In this paper, we first discuss dataset biases and show that the high AP and TTA results stem mainly from visual biases. Second, to mitigate some of these visual biases, we propose a novel deep learning framework that exploits both the visual and the geometric information of accident-related objects captured in dashcam videos. Third, we demonstrate the effectiveness of the proposed method in terms of generalization capability, comparing it with existing approaches on several open datasets of real accident videos.
