Abstract

Accurate lane position prediction is crucial for safe vehicle maneuvering in autonomous driving. Monocular cameras, aided by advances in AI, have proven effective for this task. However, predictions in 2D image space ignore lane height, leading to poor results in uphill and downhill scenarios and degrading downstream decisions, such as those made by the planning and control module. Previous 3D lane detection approaches relied solely on applying Inverse Perspective Mapping (IPM) to the encoded camera feature map, which may not be ordered according to the perspective principle, leading to sub-optimal predictions. To address these issues, we present the LS-3DLane network, inspired by the Lift-Splat-Shoot architecture, which predicts lane positions in 3D space using a data-driven approach. The network also employs a Parallelism loss that exploits prior knowledge of lane geometry to improve performance; this loss can be applied when training any 3D lane position prediction network. Our results show that LS-3DLane outperforms previous approaches such as Gen-LaneNet and 3D-LaneNet, with F-score improvements reaching 5.5% and 10%, respectively, in certain cases, while performing comparably on the X/Z error metrics. The Parallelism loss boosted the F-score of every model under test (LS-3DLane, Gen-LaneNet, and 3D-LaneNet) by up to 2.8% in certain cases and had a positive impact on nearly all other KPIs.
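The abstract does not specify the exact form of the Parallelism loss; the sketch below is a plausible illustration of the underlying idea, assuming lanes are represented as 3D polylines sampled at shared longitudinal anchors and that neighboring lanes are encouraged to have parallel tangent directions. The function name and tensor layout are hypothetical, not the paper's implementation.

```python
import numpy as np

def parallelism_loss(lanes):
    """Hypothetical parallelism-loss sketch (not the paper's exact formulation).

    lanes: float array of shape (L, N, 3) -- L lanes, each sampled at the
    same N longitudinal anchors in 3D space. Penalizes deviation of
    neighboring lanes' segment directions from being parallel.
    """
    # Tangent (segment direction) vectors via finite differences: (L, N-1, 3)
    tangents = np.diff(lanes, axis=1)
    # Normalize to unit length so only direction matters, not spacing
    tangents = tangents / np.linalg.norm(tangents, axis=-1, keepdims=True)
    # Cosine similarity between matching segments of adjacent lanes
    cos = np.sum(tangents[:-1] * tangents[1:], axis=-1)
    # Zero when all neighboring segments are parallel; grows as they diverge
    return float(np.mean(1.0 - np.abs(cos)))
```

Because the term depends only on predicted geometry, it could be added to any 3D lane network's training objective as a regularizer alongside the main position loss.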
