Abstract

Automatic accident detection is a major concern in the traffic industry. Although image-based and radar-based traffic accident detection systems are commonly employed, they have several drawbacks, including the need to secure the camera’s field of view, a high rate of false alarms, and a lengthy detection time. To overcome these limitations, this article proposes methods for identifying abnormal situations, such as car crashes and tire skids, using a real-time acoustic surveillance system and a Convolutional Neural Network (CNN)-based classification algorithm. We build an audio database by collecting sounds from two tunnels in South Korea with self-made microphones over eight months and labeling them into three categories: car crash, tire skid, and normal environmental sounds. We establish a three-step classification procedure based on the CNN classifier. We compare the detection rate and false alarm rate of our proposed method with those of other deep learning techniques, including Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), ShuffleNetv2, and MobileNetv2. In addition, we present a method for filtering out irrelevant sound data to improve the computational efficiency of our approach.
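For illustration only, since the abstract does not disclose the network architecture or preprocessing details, the sketch below shows what a small three-class CNN acoustic-event classifier over mel-spectrogram inputs could look like. The layer sizes, input shape (64 mel bands by 128 frames), and class names are assumptions for demonstration, not the paper's actual model.

```python
# Hypothetical sketch of a 3-class CNN acoustic-event classifier.
# The paper's real architecture, input representation, and training setup
# are not reproduced here; shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

CLASSES = ["car_crash", "tire_skid", "normal"]  # categories named in the abstract


class AcousticEventCNN(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        # Two convolutional blocks over a 1-channel mel-spectrogram "image".
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global average pooling keeps the head independent of spectrogram size.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time_frames)
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = AcousticEventCNN()
    dummy = torch.randn(4, 1, 64, 128)  # assumed 64 mel bands x 128 time frames
    logits = model(dummy)
    print(logits.shape)  # torch.Size([4, 3]) -> one score per class
```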
