Abstract

Lane detection, a crucial component of autonomous driving systems, is responsible for locating lanes precisely so that vehicles can navigate them correctly. However, under challenging conditions such as shadows and extreme lighting, lanes may become occluded or blurred, posing a significant challenge to lane detection because the model struggles to extract sufficient visual information from the image. Current anchor-based lane-detection networks handle complex scenes by mapping anchors onto the image to extract features and by computing the relationship between each anchor and the others for feature fusion. However, anchors alone are insufficient for extracting subtle features from the image, and there is no guarantee that the information carried by each anchor is valid. This study therefore proposes the adaptive cross-scale ROI fusion network (ACSNet) to fully extract image features so that each anchor carries more useful information. ACSNet adaptively selects important anchors and fuses them with the original anchors across scales. This feature-extraction strategy learns features over different fields of view on complex road surfaces and integrates diverse features, ensuring that lanes are detected reliably under conditions such as shadows and extreme lighting. Furthermore, because lane lines have a slender structure, images contain relatively few useful lane features. This study therefore also proposes a three-dimensional coordinate attention mechanism (TDCA) to enhance image features. TDCA extensively explores relationships among features along the row, column, and spatial dimensions, computes feature weights for each of these dimensions, and finally multiplies the weights element-wise with the entire feature map. Experimental results demonstrate that our network achieves excellent performance on the public CULane and TuSimple datasets.
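To make the attention operation concrete, below is a minimal PyTorch sketch of a three-dimensional coordinate-attention-style block in the spirit described above: it derives weight maps along the row, column, and spatial dimensions and multiplies them element-wise with the full feature map. The module name TDCASketch, the pooling choices, the reduction ratio, and all layer details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TDCASketch(nn.Module):
    """Hypothetical sketch of a three-dimensional coordinate attention block.

    Produces row-wise, column-wise, and spatial weight maps from pooled
    views of the input and applies them by broadcasted element-wise
    multiplication. All design choices here are assumptions.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 8)
        # Bottleneck producing per-row weights from a column-averaged map.
        self.row_fc = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid())
        # Bottleneck producing per-column weights from a row-averaged map.
        self.col_fc = nn.Sequential(
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid())
        # Spatial weights from a channel-pooled (mean + max) map.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Row weights: average over columns -> (N, C, H, 1).
        row_w = self.row_fc(x.mean(dim=3, keepdim=True))
        # Column weights: average over rows -> (N, C, 1, W).
        col_w = self.col_fc(x.mean(dim=2, keepdim=True))
        # Spatial weights: mean- and max-pool over channels -> (N, 1, H, W).
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        spa_w = self.spatial(pooled)
        # Broadcasted element-wise multiplication with the whole feature map.
        return x * row_w * col_w * spa_w

if __name__ == "__main__":
    feat = torch.randn(2, 64, 40, 100)      # e.g., a backbone feature map
    print(TDCASketch(64)(feat).shape)       # torch.Size([2, 64, 40, 100])
```

Because each branch's weights broadcast over the dimensions it does not index, the output preserves the input shape, so a block like this can be dropped between backbone stages without changing the rest of the network.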
