Automated driving has gained significant attention for its potential to mitigate severe driving risks in real time. Although autonomous vehicles rely heavily on sensors for lane detection, obstacle identification, and environmental awareness, accurate lane recognition remains a persistent challenge due to factors such as shadow noise, poor lane markings, and obstructed views. Despite advances in computer vision, this problem has yet to be fully resolved, leaving a gap in the current literature. The primary objective of this research is to address these challenges by developing an enhanced lane-detection system. To this end, the study integrates advanced techniques, including semantic segmentation, edge detection, and deep learning, with multi-sensor data fusion from cameras, LIDAR, and radar. Using this methodology, the research examines various lane-detection methods and benchmarks the proposed model against existing systems in terms of accuracy, specificity, and processing speed. Initial findings demonstrate that combining semantic segmentation with multi-sensor fusion improves lane detection in real-time scenarios. The proposed model achieved a lane-detection accuracy of 97.8%, a specificity of 99.28%, and an average processing time of 0.0047 seconds per epoch. This study not only addresses the limitations of existing lane-detection systems but also offers insights into improving road safety for autonomous vehicles.
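The abstract names edge detection as one component of the pipeline but does not describe its implementation. As a hedged illustration only, the following minimal NumPy sketch shows the general idea: applying Sobel gradient kernels to a synthetic road image and flagging image columns with strong edge response as candidate lane-marking boundaries. The kernels, threshold, and synthetic image are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sobel_edges(img):
    """Edge-magnitude map from 3x3 Sobel gradients over the valid region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)  # horizontal gradient
            gy[i, j] = np.sum(patch * ky)  # vertical gradient
    return np.hypot(gx, gy)

# Synthetic road image: dark asphalt with two bright lane stripes.
road = np.zeros((20, 40))
road[:, 10:12] = 1.0   # left lane marking
road[:, 28:30] = 1.0   # right lane marking

edges = sobel_edges(road)
# Columns with strong edge response mark candidate lane boundaries
# (+1 offsets column indices back into the original image frame).
lane_cols = np.where(edges.max(axis=0) > 1.0)[0] + 1
print(lane_cols)  # columns clustered around both stripes
```

A real system would follow such an edge map with line fitting (e.g., a Hough transform) and, as the abstract describes, fuse the result with segmentation masks and LIDAR/radar data.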