Abstract

This paper reviews knowledge representation approaches to the sensor fusion problem, as encountered whenever images, signals, and text must be combined to provide the input to a controller or to an inference procedure. The basic steps involved in deriving the knowledge representation scheme are:

A. locating a representation based on exogenous context information;
B. comparing two representations to determine whether they refer to the same object or entity;
C. merging sensor-based features from the various representations of the same object into a new set of features or attributes;
D. aggregating the representations into a joint fused representation, usually more abstract than each of the sensor-related representations.

The importance of sensor fusion stems first from the fact that improvements in control-law simplicity and robustness, as well as better classification results, can generally be achieved by combining diverse information sources. Second, spatially distributed sensing, or otherwise diverse sensing, itself requires fusion.
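To make steps A through D concrete, the following is a minimal Python sketch of such a fusion pipeline. It is an illustration under assumptions, not the paper's method: the survey does not prescribe an implementation, and all names here (Representation, locate, same_object, merge_features, aggregate) and the toy matching rule on a shared position feature are hypothetical.

```python
# Illustrative sketch of the four-step scheme (A-D) above. All names,
# data structures, and decision rules are assumptions for the example;
# the paper is a survey and does not prescribe an implementation.
from dataclasses import dataclass


@dataclass
class Representation:
    sensor: str                 # which sensor produced this view
    features: dict[str, float]  # sensor-based features/attributes


def locate(context: dict, observations: list[Representation]) -> list[Representation]:
    """Step A: select representations consistent with exogenous context
    (here, naively, those produced by sensors the context marks active)."""
    return [r for r in observations if r.sensor in context.get("active_sensors", [])]


def same_object(a: Representation, b: Representation, tol: float = 1.0) -> bool:
    """Step B: decide whether two representations refer to the same
    object/entity (here, by comparing a shared position feature 'x')."""
    return abs(a.features.get("x", 0.0) - b.features.get("x", 0.0)) < tol


def merge_features(reps: list[Representation]) -> dict[str, float]:
    """Step C: merge sensor-based features from representations of the
    same object into one attribute set (averaging overlapping keys,
    carrying sensor-unique keys through unchanged)."""
    merged = {}
    for key in set().union(*(r.features for r in reps)):
        vals = [r.features[key] for r in reps if key in r.features]
        merged[key] = sum(vals) / len(vals)
    return merged


def aggregate(merged: dict[str, float]) -> dict:
    """Step D: build a joint fused representation, more abstract than the
    individual sensor views (here, adding a derived symbolic label)."""
    return {"attributes": merged, "label": "near" if merged.get("x", 0.0) < 5.0 else "far"}


# Usage: fuse a camera track and a radar return into one representation.
cam = Representation("camera", {"x": 4.2, "brightness": 0.8})
rad = Representation("radar", {"x": 4.5, "velocity": 1.1})
candidates = locate({"active_sensors": ["camera", "radar"]}, [cam, rad])
if same_object(cam, rad):
    print(aggregate(merge_features(candidates)))
```

The point of the sketch is the separation of concerns: association (steps A and B) is decided before any features are combined (step C), and the fused output (step D) is deliberately more abstract than any single sensor's view, matching the progression the abstract describes.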
