Abstract

Current work on environmental perception for connected autonomous electrified vehicles (CAEVs) mainly focuses on object detection under good weather and illumination conditions; these methods often perform poorly in adverse scenarios and offer only limited scene parsing ability. This paper develops an end-to-end sharpening mixture of experts (SMoE) fusion framework to improve the robustness and accuracy of perception systems for CAEVs in complex illumination and weather conditions. Three original contributions distinguish our work from the existing literature. First, the Complex KITTI dataset is introduced, consisting of 7481 pairs of modified KITTI RGB images and generated LiDAR dense depth maps; the dataset is finely annotated at the instance level with the proposed semi-automatic annotation method. Second, the SMoE fusion approach is devised to adaptively learn robust kernels from the complementary modalities. Third, comprehensive comparative experiments are conducted, and the results show that the proposed SMoE framework yields significant improvements over other fusion techniques in adverse environmental conditions. Overall, this research proposes a SMoE fusion framework that improves the scene parsing ability of perception systems for CAEVs in adverse conditions.
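The core idea behind a mixture-of-experts fusion — a gating network that weights modality-specific expert features per pixel — can be sketched as follows. This is a minimal NumPy illustration under assumed shapes, not the authors' SMoE implementation; all names (`moe_fuse`, `w_gate`) are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_fuse(rgb_feat, depth_feat, w_gate):
    """Fuse two expert feature maps with a learned per-pixel gate.

    rgb_feat, depth_feat: (H, W, C) feature maps from the two modality experts.
    w_gate: (2*C, 2) gating weights mapping the concatenated features
            to one score per expert at each pixel.
    Returns the (H, W, C) fused feature map.
    """
    stacked = np.concatenate([rgb_feat, depth_feat], axis=-1)  # (H, W, 2C)
    scores = stacked @ w_gate                                  # (H, W, 2)
    gates = softmax(scores, axis=-1)                           # weights sum to 1 per pixel
    return gates[..., :1] * rgb_feat + gates[..., 1:] * depth_feat

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
fused = moe_fuse(rng.normal(size=(H, W, C)),
                 rng.normal(size=(H, W, C)),
                 rng.normal(size=(2 * C, C // 4)[:1] + (2,)) if False else rng.normal(size=(2 * C, 2)))
print(fused.shape)  # (4, 4, 8)
```

Because the gate is computed from both modalities at every spatial location, the network can lean on the depth expert where the RGB features are unreliable (e.g. glare or darkness), which is the intuition the paper's adverse-condition results rely on.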

Highlights

  • Connected autonomous electrified vehicles (CAEVs) offer high potential to improve road safety, boost traffic efficiency and minimize carbon emissions [1], as well as reduce vehicle wear, transportation times and fuel consumption [2, 3]

  • The single modal approach and several different fusion architectures are compared on the modified KITTI dataset, and the results demonstrate the proposed sharpening mixture of experts (SMoE) fusion network can significantly improve the accuracy and robustness of instance segmentation in complex illumination and weather conditions

  • Deep neural networks offer a wide range of choices for fusing multi-modal features at different stages due to their hierarchical nature



Introduction

1.1 Motivations and Technical Challenges

Connected autonomous electrified vehicles (CAEVs) offer high potential to improve road safety, boost traffic efficiency and minimize carbon emissions [1], as well as reduce vehicle wear, transportation times and fuel consumption [2, 3]. In recent years, vision-based perception has advanced rapidly with state-of-the-art deep neural network models [6,7,8,9], yet robust perception across all illumination and weather conditions remains challenging. To this end, perception systems [10] in CAEVs usually exploit the complementary and comprehensive information from multi-modal sensors such as vision cameras, LiDARs and Radars to accurately perceive the surrounding traffic conditions. LiDARs offer accurate 3D information about the surroundings in the form of point clouds by emitting and receiving laser beams.
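Using LiDAR alongside the camera requires projecting the 3D point cloud into the image plane to obtain a (sparse) depth map, which can then be densified as in the Complex KITTI dataset. A minimal projection sketch, with hypothetical camera intrinsics and not the paper's actual pipeline:

```python
import numpy as np

def project_points_to_depth(points, K, h, w):
    """Project LiDAR points (N, 3), given in camera coordinates, into a
    sparse (h, w) depth map using the 3x3 intrinsic matrix K.
    When several points land on the same pixel, the nearest one wins."""
    pts = points[points[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T                      # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.full((h, w), np.inf)
    for ui, vi, zi in zip(u[inside], v[inside], pts[inside, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)
    depth[np.isinf(depth)] = 0.0            # empty pixels -> 0, to be densified later
    return depth

# Hypothetical intrinsics and two sample points
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0],
                [1.0, 0.5, 20.0]])
d = project_points_to_depth(pts, K, 480, 640)
print(d[240, 320])  # 10.0
```

The resulting sparse map is typically completed into a dense depth image (e.g. by interpolation or a depth-completion network) before being fed to the depth branch of a fusion model.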
