Abstract

This paper reviews knowledge representation approaches to the sensor fusion problem, as encountered whenever images, signals, and text must be combined to provide the input to a controller or to an inference procedure. The basic steps involved in deriving the knowledge representation scheme are: (A) locate a representation, based on exogenous context information; (B) compare two representations to determine whether they refer to the same object/entity; (C) merge sensor-based features from the various representations of the same object into a new set of features or attributes; (D) aggregate the representations into a joint fused representation, usually more abstract than each of the sensor-related representations.
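The four steps (A)–(D) can be sketched as a minimal fusion pipeline. This is an illustrative sketch, not the paper's method: the `Representation` class, the distance-gate association test in step (B), and the position-averaging aggregation in step (D) are all simplifying assumptions standing in for the richer schemes the paper reviews.

```python
from dataclasses import dataclass, field

# Hypothetical sensor-level representation of one object hypothesis
# (an assumed structure, not the paper's formalism).
@dataclass
class Representation:
    sensor: str                       # e.g. "camera", "radar", "text"
    position: tuple                   # (x, y) estimate in a shared frame
    features: dict = field(default_factory=dict)

# (A) Locate: select representations relevant to the current context,
# here modeled as a rectangular region of interest.
def locate(reps, region):
    (x0, y0), (x1, y1) = region
    return [r for r in reps
            if x0 <= r.position[0] <= x1 and y0 <= r.position[1] <= y1]

# (B) Compare: decide whether two representations refer to the same
# object/entity; a simple distance gate stands in for a real association test.
def same_entity(a, b, gate=1.0):
    dx = a.position[0] - b.position[0]
    dy = a.position[1] - b.position[1]
    return (dx * dx + dy * dy) ** 0.5 <= gate

# (C) Merge: combine sensor-based features from matched representations
# into one attribute set (later sensors overwrite on key collisions).
def merge_features(reps):
    merged = {}
    for r in reps:
        merged.update(r.features)
    return merged

# (D) Aggregate: build a joint fused representation, more abstract than
# any single sensor-level one (here: averaged position, merged features).
def aggregate(reps):
    n = len(reps)
    x = sum(r.position[0] for r in reps) / n
    y = sum(r.position[1] for r in reps) / n
    return Representation("fused", (x, y), merge_features(reps))
```

For example, a camera detection and a radar return that pass the step-(B) gate would be fused into one representation carrying both the color and the speed attribute.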
