Abstract
Autonomous vehicles (AVs) represent a significant technological advance poised to transform transportation by enhancing road safety, reducing traffic congestion, and mitigating human error. AV performance depends largely on the vehicle's ability to accurately interpret its environment, which is achieved through a complex suite of sensors. Although sensors such as LiDAR, radar, and cameras each have advantages and disadvantages, no single modality can reliably handle all driving conditions. To overcome these limitations, sensor fusion combines data from multiple sensors to build a detailed, reliable perception of the environment. This paper examines existing sensor fusion techniques, identifies their limitations, and proposes a framework for enhanced fusion. The framework combines probabilistic models and machine learning strategies, improving the vehicle's object detection, tracking, and decision-making abilities. Through simulation and real-world tests, the proposed model shows substantial improvements in perception reliability, especially in adverse conditions such as bad weather or reduced visibility.
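As a minimal illustration of the probabilistic fusion idea mentioned above, the sketch below combines independent range estimates from two sensors by inverse-variance weighting, a standard rule in which each reading is weighted by the inverse of its noise variance so that the more reliable sensor dominates. The sensor names, readings, and variances are hypothetical examples, not values from the paper.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Each reading is weighted by 1/variance; the fused variance is always
    smaller than any individual sensor's variance, reflecting the gain
    in reliability from combining modalities.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_est = fused_var * (weights * estimates).sum()
    return fused_est, fused_var

# Hypothetical example: a precise LiDAR reading and a noisier radar
# reading of the same object's range, in metres.
est, var = fuse([10.2, 10.8], [0.04, 0.25])
```

Here the fused estimate lies closer to the LiDAR reading because of its smaller variance, and the fused variance is below that of either sensor alone; full fusion pipelines extend this idea to multi-dimensional state with filters such as the Kalman filter.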