Abstract

Lane detection has been a research hot spot in recent years, as it is a key technique in autonomous driving. Because lane detection must run in real time on in-vehicle systems with limited computational power and memory, most algorithms employ small networks that trade accuracy for speed. These methods mostly rely on various distillation techniques or multi-task learning to enhance overall precision, but few have exploited direction context, which is closely related to the distribution of lane pixels. In this paper, we tackle the problem of boosting lane detection performance by incorporating direction context. For efficiency, we use the lightweight ERF-Net as our backbone network. First, we incorporate spatial convolutions to propagate lane features directionally. We then train these features with rich direction context by feeding four supervision signals through a specially designed Cross-Sum Loss. We show that the proposed Cross-Sum Loss enriches the direction sensitivity of lane features, improving both the precision and the interpretability of the lane detection network at negligible time and computation cost during inference. In addition, we apply a knowledge distillation step to the direction-sensitive features for a further precision gain. Extensive experiments and analyses on the widely used CULane dataset show that the proposed method achieves state-of-the-art performance at real-time processing speed, demonstrating the effectiveness of adding direction context with the Cross-Sum Loss.
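To make the two directional components concrete, the sketch below gives a minimal, assumption-laden illustration in PyTorch. The SCNN-style slice-by-slice message passing and the cumulative-sum reading of the four supervision signals are our interpretation of the abstract, not the paper's published code; the names `DirectionalMessagePassing` and `cross_sum_loss` are hypothetical.

```python
# Illustrative sketch only: the paper's exact architecture and loss are not
# given in the abstract. We assume SCNN-style directional message passing and
# a cumulative-sum ("cross-sum") supervision term in four directions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DirectionalMessagePassing(nn.Module):
    """Propagate features slice by slice in four directions (down, up,
    right, left), so each pixel aggregates context along the elongated
    shapes that lane markings typically form."""

    def __init__(self, channels: int, kernel: int = 9):
        super().__init__()
        pad = kernel // 2
        # 1-D convolutions: a (1, k) kernel for vertical passes (row to row)
        # and a (k, 1) kernel for horizontal passes (column to column).
        self.conv_v = nn.Conv2d(channels, channels, (1, kernel), padding=(0, pad), bias=False)
        self.conv_h = nn.Conv2d(channels, channels, (kernel, 1), padding=(pad, 0), bias=False)

    def _sweep(self, x: torch.Tensor, dim: int, reverse: bool) -> torch.Tensor:
        conv = self.conv_v if dim == 2 else self.conv_h
        slices = list(x.split(1, dim=dim))          # rows (dim=2) or columns (dim=3)
        order = range(len(slices) - 2, -1, -1) if reverse else range(1, len(slices))
        offset = 1 if reverse else -1
        for i in order:
            # Each slice receives a convolved, rectified message from its neighbour.
            slices[i] = slices[i] + F.relu(conv(slices[i + offset]))
        return torch.cat(slices, dim=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self._sweep(x, dim=2, reverse=False)    # top -> bottom
        x = self._sweep(x, dim=2, reverse=True)     # bottom -> top
        x = self._sweep(x, dim=3, reverse=False)    # left -> right
        x = self._sweep(x, dim=3, reverse=True)     # right -> left
        return x


def cross_sum_loss(prob: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Hypothetical cross-sum supervision: match cumulative sums of the
    predicted lane-probability map and the binary ground truth along four
    directions, yielding four directional supervision signals."""
    loss = prob.new_zeros(())
    for dim in (2, 3):                              # vertical, horizontal axes
        for flip in (False, True):                  # forward and reverse sweeps
            p = prob.flip(dim) if flip else prob
            t = target.flip(dim) if flip else target
            loss = loss + F.l1_loss(p.cumsum(dim), t.cumsum(dim))
    return loss / 4
```

In a full pipeline, such a module would presumably sit between the ERF-Net encoder and the segmentation head, with the cross-sum term added to the usual segmentation loss; the knowledge distillation step mentioned in the abstract is omitted from this sketch.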
