Autonomous vehicles require the integration of advanced technologies such as computer vision and deep learning to perceive and navigate their surroundings. A crucial yet challenging component of this integration is accurate lane detection, which is affected by widely varying lane characteristics and road conditions. This research presents a comparative analysis of lane detection methodologies, focusing on traditional image processing techniques and Convolutional Neural Networks (CNNs). The evaluation used a sample of 500 images from the CULane dataset, which covers a diverse range of traffic scenarios. First, a method combining Gaussian blurring, Canny edge detection, and the Hough line transform was examined. Despite its efficiency, operating at 30 frames per second, this approach exhibited a high error rate (average Mean Squared Error (MSE) of 0.537), attributable to the loss of critical image detail during preprocessing. Next, the performance of a fine-tuned YOLOv8 model, trained on a reformatted version of the CULane dataset, was assessed. The combination of object detection and a subsequent Hough transform yielded high accuracy, demonstrating the model's ability to learn and identify relevant lane features. The deep CNN demonstrated superior lane detection accuracy over the classical image processing techniques, underscoring the potential applicability of deep learning within autonomous vehicle technology.