Abstract
Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Data fusion finds wide application in many areas of robotics such as object recognition, environment mapping, and localization. This work has three parts: methods, architectures, and applications. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes' rule to combine this information. Data fusion systems are often complex combinations of sensor devices, processing, and fusion algorithms. This work provides an overview of key principles in data fusion architectures from both a hardware and an algorithmic viewpoint. The applications of data fusion are pervasive in unmanned aerial vehicles (UAVs) and underlie the core problems of sensing, estimation, and perception. Two applications that bring out these features are highlighted: the first describes a navigation or self-tracking application for an autonomous vehicle; the second describes an application in mapping and environment modeling. The essential algorithmic tools of data fusion are reasonably well established. However, the development and use of these tools in realistic robotics applications is still maturing.
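As a concrete illustration of the Bayesian combination the abstract refers to, the following minimal sketch fuses two independent Gaussian observations of the same scalar quantity via Bayes' rule. The function name and the example sensor readings are assumptions made here for illustration; they do not appear in the original work.

def fuse_gaussians(mean_a, var_a, mean_b, var_b):
    """Bayes' rule for two independent Gaussian observations of one
    scalar quantity: the posterior is also Gaussian, with the means
    weighted by their inverse variances, so the more certain sensor
    pulls the fused estimate toward its reading."""
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# Illustrative values: a sonar range of 10.2 m (variance 0.25) fused
# with a lidar range of 10.0 m (variance 0.04) gives an estimate near
# the lidar reading, with variance smaller than either sensor alone.
mean, var = fuse_gaussians(10.2, 0.25, 10.0, 0.04)
print(mean, var)  # approx. 10.03, 0.034

Note how the fused variance is always smaller than either input variance: combining sensors never loses information under this model, which is the basic appeal of probabilistic fusion.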
Highlights
Humans accept input from five sense organs and senses (touch, smell, taste, sound, and sight) in different physical formats [1-3]
The human brain combines such data or information without any automatic aids because it has a powerful associative reasoning ability, evolved over thousands of years. This is the information technology (IT) age, and in this context multisource multi-sensor information fusion (MUSSIF) encompasses the theory, methods, and tools conceived and used for exploiting synergy in the information acquired from multiple sources
The simulation of the multi-sensor data fusion algorithm is analyzed through three main controls
Summary
Humans accept input from five sense organs and senses (touch, smell, taste, sound, and sight) in different physical formats, and even a sixth sense, as mystics tell us [1-3]. The main objective in sensor DF is to collect measurements and sample observations from various similar or dissimilar sources and sensors, extract the required information, draw inferences, and make decisions. The derived or assessed information and deductions can then be combined and fused with the intent of obtaining an enhanced status and identity of the perceived or observed object or phenomenon. In a target-tracking application, observations of angular direction, range, and range rate (a basic measurement-level fusion of various data) are used for estimating a target's positions, velocities, and accelerations in one or more axes. This is achieved using state-estimation techniques such as the Kalman filter. Understanding the direction and speed of the target's motion may help us determine the intent of the target, which may require automated reasoning or artificial intelligence using implicit and explicit information. For this purpose, knowledge-based methods leading to decision fusion can be used. Some of the foregoing aspects and methods are discussed in the present volume, spread over three parts [11-14]
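To make the state-estimation step concrete, below is a minimal sketch of one predict-update cycle of a linear Kalman filter tracking range and range rate from noisy range measurements. The constant-velocity model, the noise values, and the synthetic readings are assumptions chosen for illustration, not parameters from the cited text.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: new measurement,
    F: state transition, H: measurement model, Q/R: process and
    measurement noise covariances."""
    # Predict: propagate the state and its uncertainty through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction by the gain-weighted measurement residual.
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Assumed constant-velocity model with a 1 s sample time; only range is observed.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [range, range rate]
H = np.array([[1.0, 0.0]])              # measurement: range only
Q = 0.01 * np.eye(2)                    # illustrative process noise
R = np.array([[0.25]])                  # illustrative measurement noise
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:          # synthetic range readings
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # estimated range and range rate

The kinematic estimates produced this way (here, range and range rate) are exactly the cues that the higher-level, knowledge-based decision fusion described above would reason over when inferring target intent.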