Lane detection, which relies on front-view RGB cameras, is a crucial aspect of Advanced Driver Assistance Systems (ADAS), but its effectiveness is notably reduced in low-light conditions. This issue is exacerbated by the lack of specialized datasets and generalizable methods for such scenarios. To address this gap, we introduce NightLane, a comprehensive dataset tailored for low-light, multi-traffic lane detection. We adhere to stringent data annotation guidelines, ensuring reliable detection accuracy. Additionally, we propose the Fused Low-Light Enhancement Framework (FLLENet), which leverages modern detection networks and incorporates a low-light enhancement module and attention mechanisms. The enhancement module, based on zero-reference learning, improves image quality and channel richness, while the attention mechanisms effectively extract and utilize these features. Extensive testing on the NightLane and CULane datasets demonstrates superior performance in low-light lane detection, showcasing FLLENet's robust generalizability and efficacy. Specifically, our approach achieves an F1 measure of 76.90 on CULane and 78.91 on NightLane, demonstrating its effectiveness against state-of-the-art methods. We also evaluate the real-time applicability of our framework on a low-power embedded lane detection system using the NVIDIA Jetson AGX/Orin, achieving high accuracy and real-time performance. Our work offers a new approach and reference point in the field of low-light lane detection, potentially aiding in the ongoing enhancement of ADAS. The dataset is available at https://github.com/pengjingt/FLLENet.