Abstract. In the field of autonomous driving, object detection in low-light conditions presents a critical challenge. Traditional detection algorithms, which rely on RGB imagery, often suffer degraded performance under inadequate illumination. In contrast, RAW images, which retain more of the sensor's intrinsic image information, have the potential to enhance detection accuracy. This paper therefore assesses whether models trained on RAW images exhibit improved object detection performance in low-light scenarios. To this end, experiments are conducted on the Low-light Object Detection (LOD) dataset, which provides paired RAW and RGB images, with variations in lighting simulated through different exposure times. Two datasets are constructed: one consisting of RAW-normal and RAW-dark images, and another comprising RGB-normal and RGB-dark images. The models evaluated include YOLOv8, Faster R-CNN (Region-based Convolutional Neural Network), and EfficientDet, selected to ensure the robustness and generalizability of the findings. The results demonstrate that models trained on RAW images significantly outperform those trained on RGB images, achieving higher accuracy both overall and under low-light conditions. The preservation of detailed image information in the RAW format facilitates richer feature extraction, improving detection accuracy and resilience in low-light conditions. These findings offer novel insights into advancing object detection methodologies for autonomous driving systems operating in challenging lighting environments.