Abstract

Intelligent vehicles combine video-based object detection with radar data to navigate safely through their surroundings. However, because even momentary missteps in these systems can cause devastating collisions, the margin for error in their software is small. Adverse weather conditions such as rain, snow, and fog further increase the likelihood of accidents by reducing visibility and lengthening detection time. In this paper, we hypothesized that a novel object detection system that improves detection accuracy and speed under adverse weather conditions would outperform industry alternatives on average. To that end, the model employs several classical deep learning techniques across two sub-modules: a Visibility Correction Module (VCM) and an Object Detection Module (ODM). First, the model uses image classification and masking to identify environmental factors frame by frame, then applies a novel dimensionality reduction network to remove their effects. Next, the corrected images are analyzed to classify and label the objects within each frame. The proposed algorithm achieved an average accuracy of 89.72% and outperformed industry alternatives in both mean accuracy and detection time, demonstrating the validity and efficiency of using dimensionality reduction to improve object detection.
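The two-stage pipeline the abstract describes (weather identification and correction in the VCM, followed by detection in the ODM) can be sketched as below. This is a minimal illustration only: the class names, the brightness/contrast heuristic for spotting fog, and the contrast-stretch "correction" are assumptions standing in for the paper's trained classification, masking, and dimensionality reduction networks.

```python
from statistics import pstdev

class VisibilityCorrectionModule:
    """Sketch of the VCM: classify the weather condition in a frame,
    then suppress that condition's visual effect. The paper uses a
    learned dimensionality reduction network; a simple contrast
    stretch stands in for it here."""

    def classify_condition(self, frame):
        # Placeholder heuristic (not from the paper): very bright,
        # low-contrast frames are treated as foggy.
        pixels = [p for row in frame for p in row]
        mean = sum(pixels) / len(pixels)
        if pstdev(pixels) < 10 and mean > 180:
            return "fog"
        return "clear"

    def correct(self, frame):
        condition = self.classify_condition(frame)
        if condition != "clear":
            # Stand-in for the correction network: stretch intensities
            # to full range to recover detail washed out by the weather.
            pixels = [p for row in frame for p in row]
            lo, hi = min(pixels), max(pixels)
            scale = 255 / max(hi - lo, 1)
            frame = [[round((p - lo) * scale) for p in row] for row in frame]
        return frame, condition

class ObjectDetectionModule:
    """Sketch of the ODM: label objects in the corrected frame.
    A real ODM runs a trained detector; a dummy box is returned here."""

    def detect(self, frame):
        h, w = len(frame), len(frame[0])
        return [{"label": "vehicle", "box": (0, 0, w // 2, h // 2)}]

def detect_in_frame(frame):
    """Run the two-stage pipeline: VCM first, then ODM on its output."""
    corrected, condition = VisibilityCorrectionModule().correct(frame)
    detections = ObjectDetectionModule().detect(corrected)
    return condition, detections
```

The key design point the abstract emphasizes is the ordering: visibility correction runs before detection, so the detector only ever sees frames with weather effects already removed.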
