Abstract

Environment perception plays a significant role in autonomous driving, since all traffic participants in the vehicle's surroundings must be reliably recognized and localized before any subsequent action can be taken. The main goal of this paper is to present a neural network approach for fusing camera images and LiDAR point clouds in order to detect traffic participants in the vehicle's surroundings more reliably. Our approach primarily addresses the problem of sparse LiDAR data (point clouds of distant objects), where point-cloud-based detection can become ambiguous due to sparsity. In the proposed model, each 3D point in the LiDAR point cloud is augmented with semantically strong image features, injecting additional information for the network to learn from. Experimental results show that our method increases the number of correctly detected 3D bounding boxes in sparse point clouds by at least 13–21%, validating raw sensor fusion as a viable approach for enhancing autonomous driving safety in difficult sensory conditions.
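The per-point augmentation described in the abstract starts by associating each 3D LiDAR point with a pixel in the camera image. Below is a minimal sketch of that projection step, assuming KITTI-style calibration matrices (a 4x4 LiDAR-to-camera transform and a 3x4 rectified projection matrix); the function name and exact interface are illustrative and not taken from the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_velo_to_cam, P_rect):
    """Project LiDAR points (N, 3) to pixel coordinates (N, 2).

    T_velo_to_cam: (4, 4) rigid transform from the LiDAR to the camera frame.
    P_rect:        (3, 4) rectified camera projection matrix (KITTI convention).
    Returns pixel coordinates and a boolean mask of points in front of the camera.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords (N, 4)
    pts_cam = (T_velo_to_cam @ pts_h.T).T                # camera frame (N, 4)
    in_front = pts_cam[:, 2] > 0.1                       # discard points behind the camera
    pix_h = (P_rect @ pts_cam.T).T                       # projective pixel coords (N, 3)
    pix = pix_h[:, :2] / pix_h[:, 2:3]                   # perspective divide -> (u, v)
    return pix, in_front
```

Points that project outside the image bounds would additionally be filtered out before any feature lookup.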

Highlights

  • The approach proposed in this paper extends and modifies the F-PointNet model by assigning local, yet semantically strong, image features from high-resolution feature maps to each point in the LiDAR point cloud (one way to realize this step is sketched after this list)

  • Our experiments indicate that low-level camera and sparse LiDAR data fusion is a viable option for improving perception in self-driving applications, where safety considerations play a central role

  • In this paper we propose a possible improvement for existing state-of-the-art neural network architectures that rely on camera and LiDAR data
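The first highlight describes assigning image features from high-resolution feature maps to individual points. The sketch below shows one plausible way to do this in PyTorch: pixel locations from the projection step are used to bilinearly sample a CNN feature map, and the sampled features are concatenated to the point coordinates. All names are illustrative; the paper's actual feature extraction may differ.

```python
import torch
import torch.nn.functional as F

def augment_points_with_image_features(points, pix, feat_map, img_size):
    """Append bilinearly sampled image features to each LiDAR point.

    points:   (N, 3) LiDAR point coordinates.
    pix:      (N, 2) pixel coordinates (u, v) of each point in the image.
    feat_map: (1, C, H', W') feature map from a CNN backbone.
    img_size: (width, height) of the original image, used for normalization;
              assumes the feature map is spatially aligned with the image.
    Returns augmented points of shape (N, 3 + C).
    """
    w, h = img_size
    # grid_sample expects sampling locations normalized to [-1, 1], ordered (x, y)
    grid = torch.empty_like(pix)
    grid[:, 0] = 2.0 * pix[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * pix[:, 1] / (h - 1) - 1.0
    grid = grid.view(1, 1, -1, 2)                              # (1, 1, N, 2)
    feats = F.grid_sample(feat_map, grid, align_corners=True)  # (1, C, 1, N)
    feats = feats.squeeze(0).squeeze(1).T                      # (N, C)
    return torch.cat([points, feats], dim=1)                   # (N, 3 + C)
```

Bilinear sampling is a natural choice here because projected points rarely land exactly on feature map cells; it interpolates smoothly between neighboring feature vectors.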


Summary

Introduction

The main approaches to multi-sensor integration are object-level and low-level (raw) data fusion. Our contribution lies in a novel method of point cloud augmentation that accomplishes a meaningful fusion of camera and LiDAR features and allows a much closer degree of sensor integration than previous point-cloud-based approaches. Our results show that it is possible to achieve more accurate and more reliable perception by applying appropriate methods of data fusion and leveraging the synergies hidden in the statistical association between streams of data provided by multiple sensors that simultaneously measure the same environment. The practical significance of our result lies in the possibility of detecting distant traffic participants or obstacles more reliably and thereby achieving a higher level of safety in autonomous driving.
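To make the low-level fusion pipeline concrete, the sketch below composes the two illustrative helpers from earlier on this page (project_lidar_to_image and augment_points_with_image_features) into an end-to-end flow. Here cnn_backbone and point_detector are hypothetical placeholders for an image CNN and an F-PointNet-style 3D detector; they are not the paper's actual modules.

```python
import torch

def fuse_and_detect(image, points_lidar, T_velo_to_cam, P_rect,
                    cnn_backbone, point_detector):
    """End-to-end low-level fusion: image features -> augmented points -> 3D boxes.

    cnn_backbone and point_detector are hypothetical placeholders; the helpers
    are the illustrative sketches shown earlier on this page.
    """
    feat_map = cnn_backbone(image)                      # (1, C, H', W') semantic features
    pix, in_front = project_lidar_to_image(points_lidar, T_velo_to_cam, P_rect)
    pts = torch.as_tensor(points_lidar[in_front], dtype=torch.float32)
    uv = torch.as_tensor(pix[in_front], dtype=torch.float32)
    h, w = image.shape[-2:]                             # image assumed (1, 3, H, W)
    augmented = augment_points_with_image_features(pts, uv, feat_map, (w, h))
    return point_detector(augmented)                    # predicted 3D bounding boxes
```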

