Abstract

Autonomous Vehicle (AV) technologies face several challenges under adverse weather conditions such as snow, fog, rain, and sun glare. Object detection under adverse weather is one of the most critical issues facing autonomous driving. Several state-of-the-art Convolutional Neural Network (CNN)-based object detection algorithms have been deployed in autonomous vehicles and have produced promising results under favorable weather conditions. However, results from the literature show that the accuracy and performance of these CNN-based object detectors diminish rapidly under adverse weather, a problem that continues to raise major concerns in the research and automotive communities. In this paper, we take foggy weather as our case study. The goal of this work is to investigate how defogging and restoring the quality of foggy images can improve the performance of CNN-based real-time object detectors. We employ a Cycle-Consistent Generative Adversarial Network (CycleGAN)-based fog removal technique [1] to defog the foggy images and improve their visibility and quality. We train a YOLOv3 detector on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset [2] and, using the trained network, perform object detection on both the original foggy images and the restored images. We compare the detector's performance under no-fog, moderate-fog, and heavy-fog conditions. Our results show that detection performance improves significantly under moderate fog, while there is no significant improvement under heavy fog.
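
As a rough illustration of the evaluation pipeline described above, the Python sketch below defogs an image with a CycleGAN generator and then runs YOLOv3 detection on both the foggy and the restored versions. This is not the authors' code: the TorchScript file name defog_generator.pt, the input image path, and the use of the public ultralytics YOLOv3 hub weights (rather than the KITTI-trained detector from the paper) are all illustrative assumptions, and the generator is assumed to accept arbitrarily sized RGB inputs in the [-1, 1] range.

    import torch
    import cv2
    import numpy as np

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Hypothetical CycleGAN defogging generator (foggy -> clear), assumed to
    # have been exported as a TorchScript module; the file name is illustrative.
    defogger = torch.jit.load("defog_generator.pt", map_location=device).eval()

    # YOLOv3 detector loaded from the public ultralytics hub entry point; the
    # paper trains its own KITTI weights, which are not assumed here.
    yolo = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)

    def defog(bgr: np.ndarray) -> np.ndarray:
        """Restore a foggy BGR image with the CycleGAN generator."""
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 127.5 - 1.0
        x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).to(device)
        with torch.no_grad():
            y = defogger(x).squeeze(0).permute(1, 2, 0).cpu().numpy()
        out = ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
        return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)

    foggy = cv2.imread("foggy_example.png")   # illustrative input image
    restored = defog(foggy)

    # Run detection on the original foggy image and on the restored image,
    # then compare detection counts and confidences side by side.
    for name, img in [("foggy", foggy), ("restored", restored)]:
        results = yolo(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        dets = results.pandas().xyxy[0]
        print(name, len(dets), "detections")
        print(dets[["name", "confidence"]])

In this sketch the comparison is simply printed per image; in the paper the trained detector's performance is aggregated over the dataset and compared across no-fog, moderate-fog, and heavy-fog conditions.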
