Abstract

Lane detection plays a crucial role in autonomous driving, and 3D lane detection methods have advanced rapidly in recent years. A common approach simplifies the problem by transforming images into Bird's Eye View (BEV) space. However, existing methods still struggle to identify lanes accurately in complex driving scenarios such as slopes and extreme weather. This paper therefore proposes a dynamically updated lane detection method called DynamicallyLane. The method uses a variant of ConvNeXt V2-N as the backbone for feature extraction; employs dynamic learnable encoding and deformable attention, achieving an accurate front-view-to-BEV transformation through a novel representation of 3D reference points; and fuses features with an Enhanced BEV Features module, producing richer, more semantically informative representations that ease model learning. Experimental results show that DynamicallyLane achieves an F-score of 56.2% on the OpenLane dataset and performs strongly on the Apollo 3D Synthetic dataset. Our code is available at https://github.com/Tafble/lane_detection.
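
To make the front-view-to-BEV step concrete, the sketch below shows one common way such a transformation can be wired up: a grid of 3D reference points on an assumed flat ground plane is projected into the image with the camera matrices, and front-view features are gathered at the projected locations with a small learnable offset standing in for full deformable attention. This is a minimal illustration under those assumptions, not the authors' implementation; all module names, value ranges, and shapes below are hypothetical.

```python
# Illustrative sketch (not the DynamicallyLane code) of projecting 3D BEV
# reference points into a front-view feature map and sampling features there.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrontViewToBEV(nn.Module):
    def __init__(self, channels, bev_h=50, bev_w=32, z_height=0.0):
        super().__init__()
        self.bev_h, self.bev_w = bev_h, bev_w
        # Flat-ground 3D reference points (x lateral, y longitudinal, z = assumed height).
        xs = torch.linspace(-10.0, 10.0, bev_w)   # lateral range in metres (assumed)
        ys = torch.linspace(3.0, 103.0, bev_h)    # longitudinal range in metres (assumed)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        ref_3d = torch.stack([grid_x, grid_y, torch.full_like(grid_x, z_height)], dim=-1)
        self.register_buffer("ref_3d", ref_3d.reshape(-1, 3))      # (N, 3)
        # One learnable sampling offset per BEV query: a stand-in for deformable attention.
        self.query_embed = nn.Embedding(bev_h * bev_w, channels)
        self.offset = nn.Linear(channels, 2)

    def forward(self, fv_feat, cam_K, cam_T):
        """fv_feat: (B, C, Hf, Wf) front-view features.
        cam_K: (B, 3, 3) intrinsics; cam_T: (B, 3, 4) ego-to-camera extrinsics."""
        B, C, Hf, Wf = fv_feat.shape
        N = self.ref_3d.shape[0]
        # Project 3D reference points into the image: p = K @ T @ [x y z 1]^T.
        pts = torch.cat([self.ref_3d, torch.ones(N, 1, device=fv_feat.device)], dim=-1)
        cam_pts = torch.einsum("bij,nj->bni", cam_T, pts)            # (B, N, 3)
        img_pts = torch.einsum("bij,bnj->bni", cam_K, cam_pts)       # (B, N, 3)
        uv = img_pts[..., :2] / img_pts[..., 2:3].clamp(min=1e-5)    # pixel coordinates
        # Normalise to [-1, 1] for grid_sample; image size is an assumption here.
        img_w, img_h = Wf * 8, Hf * 8
        uv_norm = torch.stack([uv[..., 0] / img_w, uv[..., 1] / img_h], dim=-1) * 2 - 1
        # Deformable-style refinement: offsets predicted from the BEV queries.
        queries = self.query_embed.weight.unsqueeze(0).expand(B, -1, -1)   # (B, N, C)
        uv_norm = uv_norm + 0.01 * torch.tanh(self.offset(queries))
        # Bilinear sampling of front-view features at the (offset) projections.
        return F.grid_sample(
            fv_feat, uv_norm.view(B, self.bev_h, self.bev_w, 2),
            align_corners=False)                                      # (B, C, bev_h, bev_w)


# Usage with dummy tensors.
m = FrontViewToBEV(channels=64)
feat = torch.randn(2, 64, 45, 60)
K = torch.eye(3).expand(2, 3, 3).clone()
K[:, 0, 0] = K[:, 1, 1] = 1000.0
K[:, 0, 2], K[:, 1, 2] = 240.0, 180.0
T = torch.zeros(2, 3, 4)
T[:, 0, 0] = 1; T[:, 1, 2] = -1; T[:, 2, 1] = 1; T[:, 2, 3] = 1.5
bev = m(feat, K, T)   # (2, 64, 50, 32)
```

In the paper's pipeline, the sampled BEV features would then be refined by the dynamic learnable encoding and fused in the Enhanced BEV Features module; the single linear offset above merely approximates the role that deformable attention plays in that design.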
