Abstract

Autonomous driving depends on reliable perception systems built from multiple perception modules and advanced computer vision techniques. One crucial component of these systems is lane detection, for which traditional methods often rely on basic features such as color or edges that are sensitive to changes in lighting and perspective. Recently, convolutional neural networks (CNNs) have revolutionized lane detection. Nevertheless, existing methods still have limitations, such as the need for pixel-level labeling and computational inefficiency in real-time applications. To address these challenges, this work leverages PP-LiteSeg for real-time semantic segmentation. PP-LiteSeg's key elements are its Simple Pyramid Pooling Module (SPPM), Unified Attention Fusion Module (UAFM), and Flexible and Lightweight Decoder (FLD), which together optimize lane detection efficiency. The FLD flexibly adjusts computational cost between the encoder and decoder, balancing efficiency and accuracy. The UAFM enhances feature representations using attention mechanisms, increasing segmentation accuracy. The SPPM efficiently aggregates contextual information while reducing computational complexity. The resulting lane segmentation method achieves competitive results on popular lane detection datasets. The proposed model can adapt to different computational budgets and significantly improves lane detection efficiency for real-time applications.
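To make the attention-based fusion step concrete, the following is a minimal sketch of a UAFM-style spatial attention fusion in PyTorch. It assumes both feature maps share the same channel count; the class name, layer choices, and parameters are illustrative and are not taken from the PP-LiteSeg codebase.

```python
# Illustrative sketch of a UAFM-style spatial attention fusion (not the
# authors' implementation): a deep, low-resolution feature is upsampled and
# blended with a shallow, high-resolution feature using a learned per-pixel
# attention weight derived from channel-wise statistics.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttentionFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-pixel mean and max over channels of both inputs give 4 cues,
        # which a small conv stack turns into a single attention map.
        self.attn_conv = nn.Sequential(
            nn.Conv2d(4, 2, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(2),
            nn.ReLU(inplace=True),
            nn.Conv2d(2, 1, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the deeper (high-level) feature to the low-level resolution.
        high_up = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                                align_corners=False)
        # Spatial statistics: channel-wise mean and max of each feature map.
        cues = torch.cat([
            low.mean(dim=1, keepdim=True),
            low.max(dim=1, keepdim=True).values,
            high_up.mean(dim=1, keepdim=True),
            high_up.max(dim=1, keepdim=True).values,
        ], dim=1)
        alpha = torch.sigmoid(self.attn_conv(cues))  # (N, 1, H, W) weight map
        # Attention-weighted blend of the two feature maps.
        return alpha * high_up + (1.0 - alpha) * low


if __name__ == "__main__":
    low = torch.randn(1, 64, 80, 200)   # shallow, high-resolution feature
    high = torch.randn(1, 64, 40, 100)  # deep, low-resolution feature
    fused = SpatialAttentionFusion()(low, high)
    print(fused.shape)  # torch.Size([1, 64, 80, 200])
```

In this sketch, the attention map lets the decoder favor the upsampled semantic feature where it is confident and fall back on the high-resolution detail feature elsewhere, which is the intuition behind fusing encoder and decoder features with learned weights rather than simple addition or concatenation.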
