Abstract

The use of multimodal sensors for lane line segmentation has become a growing trend. To achieve robust multimodal fusion, we introduce a new multimodal fusion method and demonstrate its effectiveness in an improved fusion network. Specifically, a multiscale fusion module is proposed to extract effective features from data of different modalities, and a channel attention module is used to adaptively calculate the contribution of the fused feature channels. We verified the effect of multimodal fusion on the KITTI benchmark dataset and the A2D2 dataset, and demonstrated the effectiveness of the proposed method on the enhanced KITTI dataset. Our method achieves robust lane line segmentation, exceeding direct fusion by 4.53% on the precision metric and obtaining the highest F2 score of 79.72%. We believe our method introduces a structure-level optimization idea for handling modal data in multimodal fusion.

Highlights

  • Reliable and robust lane line segmentation is one of the basic requirements of autonomous driving

  • We focus on lane line segmentation based on multiple sensor fusion

  • We propose a novel multimodal fusion lane line segmentation method based on multiscale convolution and channel attention mechanisms


Introduction

Reliable and robust lane line segmentation is one of the basic requirements of autonomous driving. By placing multimodal fusion in a high-dimensional space, algorithms based on 3D detection often require large computing resources, making it difficult to meet the lightweight and real-time requirements of autonomous driving [12]. For this reason, we propose a novel multimodal fusion lane line segmentation method based on multiscale convolution and channel attention mechanisms. This article is organized as follows: in Section 2, we separately analyze current lane line segmentation algorithms based on camera images and point clouds and review the state of fusion methods; in Section 3, we present the proposed method and network structure in detail; Section 4 discusses the processing of the datasets, as well as the experimental results and performance evaluation obtained with the proposed method; in Section 5, an ablation experiment measures the contribution of each module in the proposed method; finally, the proposed methods are summarized and future directions are provided. The main contributions of the article are as follows: (1) an idea of using multiscale convolution for multimodal fusion lane line segmentation is proposed; (2) ECANet [13] is used for weight correction of the fused feature channels, which effectively improves the accuracy of the lane line segmentation model; and (3) the proposed multiscale efficient channel attention (MS-ECA) module can be widely applied in the field of multimodal fusion and transfers well to other tasks.
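To make the channel attention step concrete, the following is a minimal, dependency-free sketch of ECA-style channel reweighting as described above: each fused feature channel is globally average-pooled, a small 1D convolution slides across the pooled channel descriptors, and a sigmoid produces one weight per channel. This is an illustrative simplification, not the paper's implementation: the kernel here is a fixed averaging filter, whereas in ECANet the 1D convolution weights are learned and the kernel size is chosen adaptively from the channel count.

```python
import math

def eca_channel_weights(channel_means, k=3):
    """ECA-style channel attention (simplified sketch): a 1D
    convolution of kernel size k slides over the globally pooled
    channel descriptors, followed by a sigmoid, yielding one
    attention weight per channel."""
    # Illustrative fixed averaging kernel; in ECANet this is learned.
    kernel = [1.0 / k] * k
    pad = k // 2
    # Replicate-pad the channel descriptors at both ends.
    padded = [channel_means[0]] * pad + list(channel_means) + [channel_means[-1]] * pad
    weights = []
    for i in range(len(channel_means)):
        s = sum(kernel[j] * padded[i + j] for j in range(k))
        weights.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid
    return weights

def apply_channel_attention(feature_maps):
    """Rescale each fused feature channel by its attention weight.
    feature_maps: list of 2D lists, one per channel (e.g. channels
    coming from the camera and point-cloud branches after fusion)."""
    # Global average pooling per channel.
    means = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
             for fm in feature_maps]
    w = eca_channel_weights(means)
    return [[[v * w[c] for v in row] for row in fm]
            for c, fm in enumerate(feature_maps)]
```

Because the 1D convolution only looks at neighboring channel descriptors, this reweighting adds negligible parameters and cost, which is why it suits the lightweight, real-time constraints discussed above.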
