Abstract

This paper reviews knowledge representation approaches to the sensor fusion problem, as encountered whenever images, signals, and text must be combined to provide the input to a controller or to an inference procedure. The basic steps involved in deriving the knowledge representation scheme are:

A. locating a representation based on exogenous context information;
B. comparing two representations to determine whether they refer to the same object or entity;
C. merging sensor-based features from the various representations of the same object into a new set of features or attributes;
D. aggregating the representations into a joint fused representation, usually more abstract than each of the sensor-related representations.

The importance of sensor fusion stems first from the fact that combining diverse information sources can generally be assumed to yield simpler and more robust control laws, as well as better classification results. Second, spatially distributed sensing, or otherwise diverse sensing, itself requires fusion.
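The four steps above can be illustrated with a minimal sketch. This is not the paper's method; all class and function names here (`Representation`, `same_object`, `fuse`, and the proximity-based matching rule) are hypothetical, chosen only to make the locate/compare/merge/aggregate pipeline concrete.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    """A sensor-derived representation of an observed object (hypothetical)."""
    sensor: str                      # e.g. "camera", "radar"
    position: tuple                  # coarse location from context (step A, assumed done upstream)
    features: dict = field(default_factory=dict)

def same_object(a: Representation, b: Representation, tol: float = 1.0) -> bool:
    """Step B: decide whether two representations refer to the same entity,
    here by simple spatial proximity (an illustrative criterion only)."""
    return all(abs(p - q) <= tol for p, q in zip(a.position, b.position))

def merge_features(reps: list) -> dict:
    """Step C: merge sensor-based features from matched representations
    into one set of attributes."""
    merged = {}
    for r in reps:
        merged.update(r.features)    # later sensors refine or extend attributes
    return merged

def fuse(reps: list) -> dict:
    """Step D: aggregate matched representations into a joint, more
    abstract fused representation."""
    return {
        "sensors": sorted(r.sensor for r in reps),
        "position": tuple(sum(ps) / len(reps)
                          for ps in zip(*(r.position for r in reps))),
        "attributes": merge_features(reps),
    }

# Two representations of (possibly) the same object from different sensors:
cam = Representation("camera", (10.0, 5.0), {"colour": "red"})
rad = Representation("radar", (10.4, 5.2), {"velocity": 3.1})

if same_object(cam, rad):            # step B
    fused = fuse([cam, rad])         # steps C and D
    print(fused["attributes"])
```

The resulting fused record is more abstract than either input: it no longer belongs to one sensor, but carries the combined attribute set (here colour from the camera and velocity from the radar) plus an averaged position.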
