Many modern collaborative systems, and systems of systems, rely heavily on the sharing of information to improve performance, manage resources, and maximize overall capability. Such systems are characterized by many decentralized nodes, which may all be identical or may be partitioned into a finite set of specialized types. When information is shared within any system of systems, overall performance hinges on the ability to correctly associate the information received. The primary hypothesis evaluated upon receipt of new information is whether that information belongs to a previously observed entity. When this hypothesis is false, the new information belongs to a new entity, which may be either a real or a false entity. To evaluate this hypothesis, and to determine the optimal assignments to make at each time step, a data association discriminator is defined: a scoring function that behaves like a distance between two probability distributions with common support.

This paper defines the properties desired of a data association discriminator, identifies the measures of information that satisfy these properties, and develops the corresponding gating and scoring equations for use during the data association process. The gating and scoring functions most commonly employed in the data association literature are the square of the Mahalanobis distance and the log-likelihood score function, which are defined only between two multivariate Gaussian distributions. One objective of this paper is to demonstrate the superior characteristics of the data association discriminators presented herein, as compared with the log-likelihood score function, when determining the optimal assignments. Toward this end, and owing to the prevalence of the multivariate Gaussian distribution in the general entity tracking and information fusion literature, closed-form equations are presented for the data association discriminators based on the statistical divergences, along with the other measures of information, namely entropy and mutual information, for multivariate Gaussian distributions.

The architecture upon which a system of systems is developed and designed plays a fundamental role in selecting an appropriate association discriminator. This paper discusses various measures of information commonly used between and within the components of such systems and compares and contrasts their behavior in a common framework, with a focus on the data association problem. Commonly used measures of information, namely differential entropy, mutual information, and the statistical divergences, together with the log-likelihood score function, are examined analytically in a common mathematical framework; the advantages and disadvantages of each are discussed; new results are derived and presented; and several numerical examples based on synthetic data are presented that illuminate their behaviors and characteristics. Lastly, it is demonstrated that the Kullback–Leibler discriminator/divergence is not the best choice for use as a data association discriminator.
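For concreteness, the baseline gating and scoring quantities referenced above take the following standard forms between a measurement $z$ and a Gaussian-distributed prediction $\hat{z}$; the notation here (innovation $\nu$, innovation covariance $S$, measurement dimension $m$) is the conventional tracking usage and is not necessarily the paper's own symbols:
\[
d^2 = \nu^\top S^{-1} \nu, \qquad \nu = z - \hat{z},
\]
\[
\ell = -2 \ln p(z) = \nu^\top S^{-1} \nu + \ln \det\!\left(2\pi S\right),
\]
where $d^2$ is the squared Mahalanobis distance used for gating (a candidate pairing is accepted when $d^2$ falls below a chi-square threshold with $m$ degrees of freedom) and $\ell$ is the corresponding log-likelihood score.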
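Similarly, the closed forms alluded to for multivariate Gaussians admit standard expressions; a representative pair, sketched here with illustrative symbols for $d$-dimensional Gaussians $\mathcal{N}(\mu_i, \Sigma_i)$, is the differential entropy and the Kullback–Leibler divergence:
\[
h\!\left(\mathcal{N}(\mu, \Sigma)\right) = \tfrac{1}{2} \ln\!\left((2\pi e)^d \det \Sigma\right),
\]
\[
D_{\mathrm{KL}}\!\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right) = \tfrac{1}{2}\!\left[\operatorname{tr}\!\left(\Sigma_1^{-1} \Sigma_0\right) + (\mu_1 - \mu_0)^\top \Sigma_1^{-1} (\mu_1 - \mu_0) - d + \ln \frac{\det \Sigma_1}{\det \Sigma_0}\right].
\]
Note that $D_{\mathrm{KL}}$ is asymmetric in its arguments, one property that distinguishes it from a true distance and that bears on its suitability as a data association discriminator.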