Abstract

Numerous accidents could be avoided if drivers were alerted just a few seconds before a collision. However, collision prediction is challenging because of high computational loads, complex background clutter, and nonstationary sensors. Active sensors, such as ultrasonic, radar, and laser devices, are expensive and can cause interference problems in heavy traffic. This paper therefore explores the feasibility of a visual collision-warning system that uses only a single dashboard camera, a device that is already widely available and easy to install. Existing vision-based collision-warning systems focus on detecting specific targets, such as pedestrians, vehicles, and bicycles, using statistical models trained in advance. Instead of relying on such prior models, the proposed system aims to detect the general motion pattern of any approaching object. Because all motion vectors of points projected from an approaching object diverge from a point called the focus of expansion (FOE), we construct a cascade-like decision tree to filter out false detections at the earliest possible stage and develop a multiple-FOE segmentation algorithm that assigns optical-flow vectors to distinct objects according to their individual FOEs. Objects entering a high-risk area called the danger zone are analyzed further: their tracking steadiness is examined, and the time-to-collision (TTC) is estimated to evaluate collision risk.
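The paper's cascade decision tree and multiple-FOE segmentation algorithm are not reproduced here; the sketch below only illustrates the two classical geometric relations the abstract builds on: estimating the FOE as the least-squares intersection of extended flow vectors, and estimating TTC as the radial distance of a point from the FOE divided by its radial flow speed. The function names (`estimate_foe`, `estimate_ttc`), the use of NumPy least squares, and the median-based aggregation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares estimate of the focus of expansion (FOE).

    For a purely approaching object, each flow vector, extended as a line
    through its image point, passes (approximately) through the FOE.
    points: (N, 2) array of image coordinates (x, y)
    flows:  (N, 2) array of optical-flow vectors (u, v)
    """
    # Line through (x, y) with direction (u, v):  v*X - u*Y = v*x - u*y
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x_foe, y_foe)

def estimate_ttc(points, flows, foe, dt=1.0):
    """Robust TTC estimate (in frames, or seconds if dt is the frame period).

    For divergent motion, TTC ~ radial distance from the FOE divided by the
    radial component of the flow; taking the median over all tracked points
    is more robust than any single-point estimate.
    """
    radial = points - foe
    dist = np.linalg.norm(radial, axis=1)
    # Component of each flow vector along the outward radial direction
    radial_speed = np.sum(flows * radial, axis=1) / np.maximum(dist, 1e-6)
    valid = radial_speed > 1e-3  # keep only points that are actually diverging
    return np.median(dist[valid] / radial_speed[valid]) * dt
```

Given tracked feature points and their optical-flow vectors for one candidate object, `estimate_foe` would be applied first and its result passed to `estimate_ttc`; a small TTC for an object inside the danger zone would then signal a collision warning.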
