Advanced driver assistance systems rely primarily on visible-light images for information. However, in low-visibility weather conditions such as heavy rain or fog, visible images struggle to capture road conditions accurately. In contrast, infrared (IR) images can overcome this limitation, providing reliable information regardless of external lighting conditions. To address this problem, we propose an in-vehicle IR object detection system. We optimize the You Only Look Once (YOLO) v4 object detection algorithm by replacing its original backbone with MobileNetV3, a lightweight feature extraction network, yielding the MobileNetV3-YOLOv4 model. Furthermore, we replace traditional pre-processing methods with an Image Enhancement Conditional Generative Adversarial Network inversion algorithm to enhance the pre-processing of the input IR images. Finally, we deploy the model on the Jetson Nano, an edge device with constrained hardware resources. Our proposed method achieves a mean average precision (mAP) of 82.7% and a frame rate of 55.9 frames per second (FPS) on the FLIR dataset, surpassing state-of-the-art methods. The experimental results confirm that our approach delivers outstanding real-time detection performance while maintaining high precision.
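The following is a minimal PyTorch sketch, not the authors' released code, illustrating the backbone swap described above: a MobileNetV3-Large feature extractor tapped at three scales and fed into simple YOLO-style detection heads. The stage split indices, channel counts, class and anchor counts, and the omission of the SPP/PANet neck used in full YOLOv4 are illustrative assumptions.

```python
# Sketch of the MobileNetV3-as-backbone idea (assumptions noted inline);
# the real MobileNetV3-YOLOv4 also includes neck modules and trained weights.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large


class MobileNetV3Backbone(nn.Module):
    """Returns multi-scale feature maps from MobileNetV3-Large."""

    def __init__(self):
        super().__init__()
        features = mobilenet_v3_large(weights=None).features
        # Split the feature stack so we can tap maps at strides 8, 16, and 32
        # (split indices are assumptions for torchvision's layer ordering).
        self.stage1 = features[:7]    # ends at ~stride 8, 40 channels
        self.stage2 = features[7:13]  # ends at ~stride 16, 112 channels
        self.stage3 = features[13:]   # ends at ~stride 32, 960 channels

    def forward(self, x):
        c3 = self.stage1(x)
        c4 = self.stage2(c3)
        c5 = self.stage3(c4)
        return c3, c4, c5


class SimpleYoloHead(nn.Module):
    """Toy 1x1-conv head predicting num_anchors * (5 + num_classes) channels."""

    def __init__(self, in_channels, num_classes=3, num_anchors=3):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, x):
        return self.pred(x)


class MobileNetV3Yolo(nn.Module):
    """Lightweight backbone plus per-scale detection heads (neck omitted)."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.backbone = MobileNetV3Backbone()
        self.heads = nn.ModuleList([
            SimpleYoloHead(40, num_classes),
            SimpleYoloHead(112, num_classes),
            SimpleYoloHead(960, num_classes),
        ])

    def forward(self, x):
        return [head(f) for head, f in zip(self.heads, self.backbone(x))]


if __name__ == "__main__":
    model = MobileNetV3Yolo(num_classes=3)
    outputs = model(torch.randn(1, 3, 416, 416))
    print([o.shape for o in outputs])  # predictions at strides 8, 16, 32
```

A model structured this way would typically be exported (e.g., to ONNX/TensorRT) before deployment on a resource-constrained device such as the Jetson Nano; the export path is not specified in the abstract.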