Abstract
Research on integrating camera and LiDAR sensors in self-driving systems is scientifically significant in the context of Industry 4.0 and the application of artificial intelligence. This work contributes to improving the accuracy of recognizing and localizing objects in complex environments, providing a foundation for further research on optimizing response time and improving the safety of autonomous driving systems. This study proposes a real-time multi-sensor data fusion method, termed "Multi-Layer Fusion," for object detection and localization in autonomous vehicles. The fusion process leverages pixel-level and feature-level integration, ensuring seamless data synchronization and robust performance under adverse conditions. Experiments were conducted on the CARLA simulator. The results show that the method significantly improves environmental perception and object localization, achieving a mean detection accuracy of 95% and a mean distance error of 0.54 meters across diverse conditions, with real-time performance at 30 FPS. These results demonstrate the method's robustness in both ideal and adverse scenarios.