Perception to Motion: The Role of Multi-Sensor Data Fusion in Realizing the Full Potential of Autonomous Vehicles
Multi-source and heterogeneous information fusion (MSHIF) is a critical approach for enhancing the performance of autonomous vehicles (AVs), particularly in environmental perception and decision-making. This review discusses the potential of AVs to reduce carbon emissions and improve traffic flow by transforming transportation systems. It identifies the principal sensor types in AV systems: cameras, LiDAR, MMW-radar, and GPS/IMU modules. To harness the full potential of these sensors, multiple fusion algorithms are employed, such as Kalman filtering, particle filters, and Bayesian networks. Together they significantly enhance the accuracy and reliability of AV operations; however, their inherent challenges must be addressed and future research directions in the AV domain explored. AVs require real-time data processing so that rapid decisions can be made in dynamic environments, which makes advances in computational efficiency and algorithmic sophistication crucial. Cybersecurity emerges as another critical concern, given the increasing connectivity of AVs to external networks. The review also addresses the integration of blockchain technology to strengthen security measures and facilitate transparent data sharing among AV stakeholders. Finally, ethical considerations surrounding AI-driven decision-making in AVs are discussed, since establishing ethical guidelines requires prioritizing human safety. Further research and development for AVs could focus on sensor fusion techniques, cybersecurity, and ethical frameworks. These advancements will not only enhance the safety and reliability of AV systems but also pave the way for their widespread adoption in future transportation ecosystems.
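Of the fusion algorithms named above, Kalman filtering is the most widely used. As a minimal sketch (not drawn from the review itself), a scalar Kalman filter fusing noisy position readings might look like the following; the noise variances and measurements are chosen purely for illustration:

```python
# Minimal scalar Kalman filter fusing noisy position readings
# (e.g. from a GPS module). Noise values q and r are illustrative only.

def kalman_step(x, P, z, q=0.01, r=1.0):
    """One predict/update cycle for a scalar state.
    x: estimate, P: estimate variance, z: measurement,
    q: process-noise variance, r: measurement-noise variance."""
    P = P + q                # predict: uncertainty grows by process noise
    K = P / (P + r)          # Kalman gain: trust measurement vs. prediction
    x = x + K * (z - x)      # update: blend estimate and measurement
    P = (1 - K) * P          # updated uncertainty shrinks
    return x, P

x, P = 0.0, 1.0                           # initial guess and variance
for z in [1.2, 0.9, 1.1, 1.0, 0.95]:     # noisy readings near 1.0
    x, P = kalman_step(x, P, z)
# x is pulled toward the measurements and P shrinks with each reading
```

In a real AV stack the state would be a vector (position, velocity, heading) and the gain a matrix, but the predict/update structure is the same.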
- Conference Article
38
- 10.1109/dsn-w.2017.14
- Jun 1, 2017
Massive investment in 'intelligent' vehicle technologies is set to turn autonomous vehicles into reality within a few years. Embedding this intelligence in road vehicles is expected to reduce traffic accidents by having computerized autopilots mitigate the errors and imperfections of human drivers. However, autonomous vehicles must mitigate the existing hazards of roadway transportation systems without creating new ones. Some critical aspects therefore need closer consideration, such as how to ensure safety in this new vehicle paradigm. There is no specific method to analyze and assure the safety levels of an autonomous vehicle system. Even ISO 26262, a safety standard that specifies requirements and activities throughout the road-vehicle development lifecycle, cannot be applied to the scope of autonomous road vehicles. This paper proposes a design strategy for the architecture design level of autonomous vehicles that can facilitate development, analysis and, consequently, safety-level assurance. The main idea is to implement an independent module, the Autonomous Vehicle Control (AVC), that both interacts with the vehicle's systems and creates a protection layer independent of how the vehicle's system was developed. The AVC could thus be used with any autonomous vehicle system and tested individually. The strategy is based on recommended practices published by the Society of Automotive Engineers (SAE) and on approaches used in other transportation system domains. Another important point is that the proposed module is intended, in principle, for fully autonomous cars (high levels of driving automation). It is therefore expected that, in the future, the module can be used to develop a safety software standard or to adapt existing ones to the needs of autonomous road vehicles.
- Conference Article
2
- 10.1109/itsc.2018.8569715
- Nov 1, 2018
Existing costs show that, on average, Americans spend approximately $0.54 per mile driving individually owned (IO) vehicles. This value is based on factors such as fuel, maintenance, insurance, registration, parking, tolls, and depreciation. Many of these factors will be affected by the arrival of autonomous vehicles (AVs) and, more specifically, Shared Autonomous Vehicle (SAV) systems. To realize the advantages of future AVs and SAVs, a better understanding of future travel costs is necessary. The goal of this research is to estimate future travel costs for both AV and SAV systems; these estimates are crucial for evaluating the impacts of the technologies on future mobility and society. For this purpose, extensive literature reviews were conducted and a model was developed to analyze and evaluate future travel costs for each system. The results indicate that future travel costs using SAV systems are likely to be substantially lower than those using AV systems. This may lead to a disruption that ultimately influences consumers to switch from owning a vehicle to on-demand mobility services.
- Research Article
49
- 10.1007/s12083-021-01073-x
- Feb 24, 2021
- Peer-to-Peer Networking and Applications
For the past few years, the automation of transportation has been a hot research topic for smart cities. Intelligent Transportation Systems (ITS) aim to manage and optimize traffic congestion, road accidents, and parking allocation using Autonomous Vehicle (AV) systems, in which AVs are interconnected for message passing and critical decision-making in time-sensitive applications. Data security in such applications can be provided by Blockchain (BC) technology. However, per the existing literature, no system exists that can summon AVs automatically based on the situation, i.e., call an ambulance in case of an accident, a logistics service in case of a home move, or the traffic department in case of a traffic jam. Motivated by these gaps, this article proposes a BC-based secure and intelligent sensing and tracking architecture for AV systems using a beyond-5G communication network. AVs currently face issues with sensing and tracking technology as well as data theft. An AV system contains sensitive information and transfers it through a communication channel to Connected AVs (CAVs), where corrupted information or a delay of a fraction of a second can lead to a critical situation. We therefore present possible attacks and safety countermeasures that use BC technology to protect the AV system. The proposed architecture ensures secure sensing and tracking of an object through BC by deploying AI algorithms at edge servers, while the beyond-5G network enables communication with low latency and high reliability to meet the demands of the aforementioned time-sensitive applications. The proposed system is evaluated on mobility and data transfer time against traditional LTE-A and 5G communication networks; it outperforms the traditional systems and is suitable for diverse applications where latency, reliability, and security are the prime concerns.
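The tamper-evidence that blockchain brings to shared AV records comes from hash-chaining: each block stores the hash of its predecessor, so altering an earlier record invalidates every later link. A toy sketch, with record contents and field names invented for illustration:

```python
import hashlib
import json

# Toy hash-chain illustrating the tamper-evidence property that BC
# gives to shared AV sensor records. Record strings are invented.

def make_block(record, prev_hash):
    """Build a block whose hash covers both the record and the
    previous block's hash, chaining the two together."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = [make_block("genesis", "0" * 64)]
for rec in ["obstacle@t1", "position@t2"]:
    chain.append(make_block(rec, chain[-1]["hash"]))

# Tampering with block 1 breaks the link stored in block 2:
tampered = dict(chain[1], record="forged")
valid = make_block(tampered["record"], tampered["prev"])["hash"] == chain[2]["prev"]
# valid is False: the forged block's hash no longer matches block 2's prev
```

A real BC adds consensus and distribution on top, but the integrity check above is the core mechanism.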
- Conference Article
- 10.4271/2022-01-0362
- Mar 29, 2022
Spurred by the constraints of the COVID-19 pandemic, virtual testing is becoming an increasingly essential method for verification and validation of autonomous ground vehicle simulation tools. The Mobility Systems Branch (MSB) of the US Army Corps of Engineers Engineering Research and Development Center (ERDC) Geotechnical and Structures Laboratory (GSL) has developed a new approach to physics-based virtual testing of autonomous ground vehicle systems through the incorporation of both qualitative and quantitative data in congruency with ERDC's Software-in-the-Loop laboratory. Virtual testing of autonomous vehicles combines simulation tools, consisting of vehicle and sensor models represented in a virtual scene, with both performer observations and modeling-and-simulation observations. The first iteration of a Virtual Engineering Evaluation Test (V-EET) for robotic and autonomous ground vehicle systems took place in 2021 at the ERDC in Vicksburg, Mississippi. Virtual testing took place over the course of several months, with remote researchers participating from across the country. Researchers used a combination of quantitative and qualitative methods to identify discrepancies between traditional field Engineering Evaluation Test data collection and V-EET data collection. Also identified were issues within testing protocols and difficulties associated with overwhelming and complex data sets. Building on these findings, researchers developed a new virtual testing framework that addressed the issues and offered more versatility, including streamlined and efficient data collection and analysis, standardization of observation collection techniques, and objectification of qualitative data to be used across relevant products with visual or human components. This will provide the most efficient and robust products possible and improve situational awareness for autonomous vehicle assessment in complex on- and off-road environments.
- Research Article
- 10.54254/2755-2721/52/20241470
- Mar 27, 2024
- Applied and Computational Engineering
The perception and navigation of autonomous vehicles rely heavily on sensor technology and sensor fusion techniques, which play an essential role in ensuring a secure and proficient understanding of the vehicle's environment. This paper highlights the significance of sensors in autonomous vehicles and how sensor fusion techniques enhance their capabilities. First, the paper introduces the different types of sensors commonly used in autonomous vehicles and explains their principles of operation, strengths, and limitations in capturing essential information about the vehicle's environment. Next, it discusses various sensor fusion algorithms, such as Kalman filters and particle filters. The paper then explores the challenges associated with sensor fusion and addresses the handling of sensor failures and uncertainties. The benefits of sensor fusion in autonomous vehicles are also presented, including improved perception of the environment, enhanced object recognition and tracking, better trajectory planning, and greater safety through redundancy and fault tolerance. Lastly, the paper discusses recent advancements and highlights the integration of artificial intelligence and machine learning techniques to optimize sensor fusion algorithms and improve the overall autonomy of the vehicle. From this analysis, it can be deduced that sensor and sensor fusion technology play a critical role in enabling efficient and secure autonomous vehicle navigation in complex surroundings.
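Alongside Kalman filters, the abstract names particle filters, which represent the belief over a state as a cloud of weighted samples. A toy bootstrap particle filter for a 1-D position, with motion model, noise levels, and measurements chosen purely for illustration, could look like:

```python
import math
import random

# Toy bootstrap particle filter tracking a 1-D position.
# Motion model, noise levels, and measurements are illustrative only.

random.seed(0)

def pf_step(particles, z, motion=1.0, noise=0.5):
    # Predict: propagate each particle through the motion model
    particles = [p + motion + random.gauss(0, noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement z per particle
    weights = [math.exp(-((p - z) ** 2) / (2 * noise ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: keep particles in proportion to their weights
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.gauss(0.0, 1.0) for _ in range(500)]
for z in [1.0, 2.1, 2.9, 4.0]:   # simulated noisy position readings
    particles = pf_step(particles, z)
estimate = sum(particles) / len(particles)   # posterior mean, near 4.0
```

Unlike the Kalman filter, this sampling approach can represent multi-modal and non-Gaussian beliefs, at the cost of more computation.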
- Book Chapter
- 10.1007/978-3-030-50936-1_109
- Jan 1, 2020
The article describes the concept of the control system for the prototype EVE autonomous electric vehicle. The developed control system consists of several subsystems whose correct operation, appropriate data exchange, and properly selected and configured low-level and high-level control algorithms allow its use in autonomous driving tasks. The article describes the individual subsystems of the control scheme: the measurement system, the low-level and high-level control systems, and the safety system that guarantees safe operation of the EVE autonomous vehicle.
- Research Article
66
- 10.1016/j.compenvurbsys.2021.101686
- Jul 22, 2021
- Computers, Environment and Urban Systems
A systematic review of agent-based models for autonomous vehicles in urban mobility and logistics: Possibilities for integrated simulation models
- Research Article
116
- 10.1177/1548512915575803
- Mar 11, 2015
- The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology
Technological developments towards vehicle automation have been taking place for years. Satellite navigation, cruise control, and anti-lock braking systems (ABS) are well-known examples of automation technology used in everyday cars. The trend of automation in vehicles is expected to move closer to full autonomy through advances in robotics and artificial intelligence. As a result, our daily lives will become more and more dependent on digitally controlled devices and vehicle systems (partially or highly automated). At the same time, computing and digital systems tend to become more fragile and susceptible to faults and failures because of cyber-attacks, software and hardware defects, and accidental defects introduced by developers. Autonomous vehicle systems must therefore be developed with such dangers in mind and equipped with defensive capabilities and measures so that they can respond automatically and dynamically to both accidental and deliberate defects and attacks. In this study, autonomous and unmanned vehicle systems are examined for their cyber-security vulnerabilities. Threats and attacks exploiting these vulnerabilities are identified and classified. Development guidelines and a mitigation strategy for the development of autonomous and unmanned vehicle systems are proposed and discussed in the final section.
- Research Article
1
- 10.1007/s00146-025-02367-x
- Apr 24, 2025
- AI & SOCIETY
Successful implementation of autonomous vehicle (AV) technology is not only an engineering challenge but also a social, political, and ethical one. As AVs become commonplace and begin affecting people’s daily lives in a more profound way, media coverage of the social and ethical considerations of these technologies will follow suit. We seek to analyze and categorize the media’s portrayal of the social and ethical issues surrounding AVs to better understand how these issues shape public debate. Our research employs a qualitative thematic analysis of existing public discourse, identifying six areas of interest: (1) social and ethical issues; (2) ethical frameworks; (3) recommendations; (4) tone; (5) type of AV technology; and (6) sensationalism. These themes were derived through a combination of inductive and deductive analysis, informed by existing literature on AV ethics and media framing. Through this approach, we aimed to identify salient considerations in AV technology development and inform future AV systems decisions. Our contributions to the study of public visibility seek to enhance product development and the implementation of AVs, which will largely be steered by public discourse.
- Research Article
4
- 10.3390/info15100645
- Oct 16, 2024
- Information
Autonomous vehicles (AVs) represent a transformative innovation in transportation, promising enhanced safety, efficiency, and sustainability. Despite these promises, achieving robustness, reliability, and adherence to ethical standards in AV systems remains challenging due to the complexity of integrating diverse technologies. This survey reviews literature from 2017 to 2023, analyzing over 90 papers to explore the integration of knowledge graphs (KGs) into AV technologies. Our findings indicate that KGs significantly enhance AV systems by providing structured semantic understanding, improving real-time decision-making, and ensuring compliance with regulatory standards. The paper identifies that while KGs contribute to better environmental perception and contextual reasoning, challenges remain in their seamless integration with existing systems and in maintaining processing speed. We also address the ethical dimensions of AV decision-making, advocating for frameworks that prioritize safety and transparency. This review underscores the potential of KGs to address critical challenges in AV technologies, offering a hopeful and optimistic outlook for the development of robust, reliable, and socially responsible autonomous transportation solutions.
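The structured semantic understanding that knowledge graphs provide can be illustrated with a minimal triple store. The entities and relations below are invented for illustration and are not taken from the survey:

```python
# Minimal knowledge-graph sketch: facts as (subject, predicate, object)
# triples, queried for contextual reasoning. All names are invented.

triples = {
    ("traffic_light", "is_a", "road_signal"),
    ("red_light", "is_a", "traffic_light"),
    ("red_light", "requires", "stop"),
}

def objects(subject, predicate):
    """Return all objects linked to subject by predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}

action = objects("red_light", "requires")   # -> {"stop"}
```

Production KG stacks (e.g. RDF stores queried with SPARQL) add schemas, inference rules, and scale, but the triple-and-query pattern is the same.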
- Research Article
166
- 10.1016/j.jterra.2013.03.004
- Apr 16, 2013
- Journal of Terramechanics
A technical review on navigation systems of agricultural autonomous off-road vehicles
- Research Article
2
- 10.56557/jobari/2025/v31i49405
- Jun 9, 2025
- Journal of Basic and Applied Research International
Multimodal sensor fusion refers to the combination of data from various sensors to produce a more comprehensive and accurate understanding of the environment, enabling autonomous systems to make informed decisions. With the increasing adoption of autonomous vehicles and robotics, the need for robust and reliable sensor fusion techniques has become paramount. These systems must accurately interpret their environment, detect obstacles, and make rapid decisions to ensure safety and efficiency. Despite its numerous advantages, multimodal sensor fusion faces several challenges, including data synchronisation, computational complexity, and real-time processing demands. To address these challenges, researchers are developing advanced algorithms and exploring machine learning techniques that optimise data processing. This paper presents a comprehensive review of multimodal sensor fusion techniques for autonomous systems, focusing on the integration of data from visual, acoustic, tactile, inertial, and environmental sensors to enhance decision-making in autonomous vehicles and robotics. By combining data from multiple sensors, multimodal sensor fusion enables autonomous systems to perceive their environment more accurately, improving obstacle detection, lane tracking, motion forecasting, and scene understanding. The review explores various sensor fusion techniques, including data-level, feature-level, and decision-level fusion, and discusses their applications in autonomous vehicles and robotics. Ultimately, the paper aims to contribute to the development of more robust and reliable autonomous systems, enabling safer and more efficient autonomous vehicles and robots. It also addresses challenges and limitations, such as sensor noise and uncertainty, data association, and computational complexity, and highlights future directions, including deep learning-based approaches and multi-agent sensor fusion.
Computational complexity poses another challenge in sensor fusion. Integrating data from multiple sensors is computationally intensive, particularly when combining high-resolution inputs from lidar, radar, and visual sensors in real time. Moreover, real-time processing requirements present a limitation for sensor fusion in autonomous systems. Applications like autonomous driving or drone navigation require near-instantaneous processing of multimodal sensor data to make split-second decisions. The study concluded that Multimodal Sensor Fusion technology delivers an incredibly detailed and accurate picture of the environment, which is crucial for safe and efficient navigation.
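Of the fusion levels the review lists, decision-level fusion is the simplest to sketch: each sensor's detector reports a confidence independently, and the confidences are then combined. The noisy-OR rule below assumes the detectors err independently; the sensor names and numbers are invented for illustration:

```python
# Decision-level fusion sketch: independent per-sensor detection
# confidences combined with the noisy-OR rule. Values are invented.

def noisy_or(confidences):
    """P(obstacle present) assuming detectors miss independently."""
    p_all_miss = 1.0
    for c in confidences:
        p_all_miss *= (1.0 - c)
    return 1.0 - p_all_miss

readings = {"camera": 0.60, "lidar": 0.75, "radar": 0.40}
fused = noisy_or(readings.values())
# Agreement between sensors raises the fused confidence above
# any single detector's confidence: 1 - 0.4*0.25*0.6 = 0.94
```

Data-level and feature-level fusion intervene earlier in the pipeline (raw samples and extracted features respectively) and are correspondingly more expensive but can exploit cross-sensor correlations that decision-level fusion discards.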
- Research Article
5
- 10.1016/j.trc.2023.104179
- May 26, 2023
- Transportation Research Part C: Emerging Technologies
Managing a bi-modal bottleneck system with manned and autonomous vehicles: Incorporating the effects of in-vehicle activity utilities
- Conference Article
1
- 10.1109/iccitm53167.2021.9677676
- Aug 25, 2021
An object recognition system is an automobile safety system designed to protect the autonomous vehicle and other traffic participants and to reduce collision risk. Road accidents have long been a significant problem involving loss of life and property, so recent years have seen rapid development of autonomous and semi-autonomous vehicles. Autonomous vehicles are a comprehensive solution built for safety and comfort on the roads, but this solution faces many challenges. One of them is spotting and recognizing obstacles while navigating; as with human drivers, the only way to discover and recognize these obstacles is to see them, which makes vision systems an essential part of this type of vehicle. This paper proposes a vision-based system for autonomous vehicles to recognize objects and traffic lights on the road. The proposed system comprises three phases: image pre-processing, feature extraction, and classification. In the first phase, image pre-processing techniques prepare and improve the input images in three stages: conversion of color images to grayscale, histogram equalization, and image resizing. In the second phase, features are extracted from the images using Principal Component Analysis (PCA). In the third phase, the extracted features are fed to the proposed one-dimensional Convolutional Neural Network (1D-CNN) model for object classification and recognition. The results show that the proposed CNN model achieved a high recognition rate, with a classification precision of 100% and an error rate of 0%. The low number of false alarms and the high precision rate show that the proposed system performs very well in recognizing objects.
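One step of the pre-processing phase described above, histogram equalization, can be sketched in a few lines using the standard CDF-based remapping; the pixel values below are an invented low-contrast patch, not data from the paper:

```python
# Histogram equalization of an 8-bit grayscale "image", flattened to a
# list of pixel values. Input values are invented for illustration.

def equalize(pixels, levels=256):
    """Map gray levels through the normalized cumulative histogram,
    stretching a low-contrast image across the full gray range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the histogram
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization formula, rounded to the nearest gray level
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

image = [52, 55, 61, 59, 79, 61, 76, 61]   # low-contrast patch
out = equalize(image)                      # spans the full 0..255 range
```

In practice this runs per image (often via an optimized library routine) before PCA reduces the equalized pixels to a compact feature vector for the classifier.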
- Research Article
32
- 10.1016/j.simpat.2020.102210
- Oct 21, 2020
- Simulation Modelling Practice and Theory
Transitioning to a driverless city: Evaluating a hybrid system for autonomous and non-autonomous vehicles