Abstract
Lane detection is a crucial visual perception task in autonomous driving and one of the core modules in advanced driver assistance systems (ADASs). To address the insufficient real-time performance of current segmentation-based models, and the conflict between the demand for high inference speed and excessive parameter counts on resource-constrained edge devices (such as onboard hardware and mobile terminals) in complex real-world scenarios, this paper proposes an efficient and lightweight auxiliary branch network (CBGA-Auxiliary). First, to enhance the model's ability to extract feature information in complex scenarios, a row anchor-based feature extraction method built on global features is adopted. Second, using ResNet as the backbone and CBGA (Conv-Bn-GELU-SE Attention) as the fundamental module, we construct the auxiliary segmentation network, significantly accelerating the model's segmentation training. In addition, we replace the standard convolutions in the branch network with lightweight GhostConv convolutions, reducing parameters and computational complexity while maintaining accuracy. Finally, an additional enhanced structural loss function is introduced to compensate for the loss of structural information inherent in the row anchor-based method, further improving detection accuracy. The model underwent extensive experimentation on the Tusimple and CULane datasets, which cover a wide range of road scenarios. It achieved F1 scores of up to 96.1% and 71.0% on Tusimple and CULane, respectively. At a resolution of 288 × 800, the ResNet18 and ResNet34 variants reached maximum inference speeds of 410 FPS and 280 FPS, respectively, a significant speed advantage over existing SOTA models. The model strikes a good balance between accuracy and inference speed, making it suitable for deployment on edge devices and validating its effectiveness.
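The abstract does not include implementation details, but the named components follow well-known designs. Below is a minimal PyTorch sketch, assuming the CBGA block is Conv → BatchNorm → GELU followed by squeeze-and-excitation (SE) attention, and that GhostConv follows the GhostNet recipe of generating part of the output channels with a cheap depthwise convolution. All class names, channel counts, and hyperparameters here are illustrative assumptions, not the authors' code.

```python
# Sketch only: one possible reading of the CBGA block and GhostConv
# described in the abstract. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class SEAttention(nn.Module):
    """Standard squeeze-and-excitation channel attention (Hu et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial context
        self.fc = nn.Sequential(                 # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.GELU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight channels


class GhostConv(nn.Module):
    """GhostNet-style convolution: half the output channels come from a
    normal conv, the other half from a cheap depthwise conv applied to the
    first half, cutting parameters and FLOPs (assumes out_ch is even)."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Conv2d(in_ch, primary, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.cheap = nn.Conv2d(primary, out_ch - primary, kernel_size,
                               padding=kernel_size // 2, groups=primary,
                               bias=False)       # depthwise "ghost" features

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


class CBGA(nn.Module):
    """Conv-BN-GELU-SE block; the conv is a GhostConv, matching the
    abstract's lightweight variant of the auxiliary branch."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = GhostConv(in_ch, out_ch)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()
        self.se = SEAttention(out_ch)

    def forward(self, x):
        return self.se(self.act(self.bn(self.conv(x))))


if __name__ == "__main__":
    # A plausible ResNet18 stage-2 feature map at the paper's 288 x 800 input.
    x = torch.randn(1, 64, 36, 100)
    print(CBGA(64, 128)(x).shape)  # -> torch.Size([1, 128, 36, 100])
```

Under this reading, the auxiliary branch adds supervision only during training and can be dropped at inference, which is consistent with the high FPS figures reported for the row anchor-based main branch.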