Abstract

Due to the increase in motor vehicle accidents, there is a growing need for high-performance car crash detection systems. The authors of this research propose a car crash detection system that uses both video and audio data from dashboard cameras to improve detection performance. While most existing car crash detection systems rely on single-modal data (i.e., video or audio data only), the proposed system uses an ensemble deep learning model based on multimodal data (i.e., both video and audio), because different types of data extracted from one information source (e.g., a dashboard camera) can be regarded as different views of the same source. These views complement one another and improve detection performance, because one view may contain information that the other does not. In this research, deep learning techniques, namely the gated recurrent unit (GRU) and the convolutional neural network (CNN), are used to develop the car crash detection system, and a weighted average ensemble is used as the ensemble technique. The proposed system, which combines multiple classifiers that use both video and audio data from dashboard cameras, is validated by comparison with single classifiers that use video or audio data only. YouTube clips of car accidents are used for validation. The experimental results indicate that the proposed system performs significantly better than the single classifiers. It is expected that the proposed car crash detection system can be used as part of an emergency road call service that recognizes traffic accidents automatically and enables immediate rescue once the accident information is transmitted to emergency recovery agencies.
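
For intuition, the weighted average ensemble described above can be sketched as follows. This is a minimal illustration only: the per-modality crash probabilities, the weights w_video and w_audio, and the decision threshold are assumed values for demonstration, not the classifiers or hyperparameters used in the paper.

```python
# Minimal sketch: weighted average ensemble over two single-modality classifiers.
# The probabilities, weights, and threshold below are illustrative assumptions.
import numpy as np

def weighted_average_ensemble(p_video, p_audio, w_video=0.6, w_audio=0.4):
    """Combine per-clip crash probabilities from a video-based classifier
    (e.g., CNN + GRU over frame features) and an audio-based classifier
    (e.g., CNN over audio features) into a single ensemble score."""
    assert abs(w_video + w_audio - 1.0) < 1e-9, "weights should sum to 1"
    return w_video * np.asarray(p_video) + w_audio * np.asarray(p_audio)

# Example: crash probabilities predicted by each single-modality model (assumed)
p_video = [0.82, 0.10, 0.55]   # P(crash) from the video classifier
p_audio = [0.90, 0.05, 0.30]   # P(crash) from the audio classifier

p_ensemble = weighted_average_ensemble(p_video, p_audio)
crash_detected = p_ensemble >= 0.5   # decision threshold (assumed)
print(p_ensemble, crash_detected)
```

In this scheme, a clip that only one modality scores highly (e.g., a loud impact sound with an occluded camera view) can still cross the detection threshold, which is the complementarity argument made in the abstract.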
