Abstract

The accuracy of multiclass classification of collections of objects drawn from a given ensemble of data sources is investigated using the average mutual information between the datasets of the sources and the set of classes. We consider two fusion schemes: the WMV (Weighted Majority Vote) scheme, which combines the decisions made on the objects by the individual sources, and the GDM (General Dissimilarity Measure) scheme, which combines the metrics defined on the datasets of the sources. For a given metric classification model, it is proved that the weighted mean of the average mutual information per source in the WMV scheme is smaller than the corresponding mean in the GDM scheme. Using a lower bound on the appropriate rate distortion function, it is shown that the resulting lower bound on the error probability in the WMV scheme exceeds the corresponding bound in the GDM scheme. This theoretical result is confirmed by a computational experiment on face recognition with HSI color images, whose H, S, and I channels form the ensemble of sources.
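
To make the contrast between the two fusion schemes concrete, the following Python sketch (not taken from the paper; the function names, weights, and toy data are illustrative assumptions) implements both schemes for a simple nearest-neighbor metric classifier over a toy ensemble of three sources. In WMV each source classifies on its own and the decisions are fused by a weighted vote; in GDM the per-source metrics are fused first and a single decision is made in the combined dissimilarity measure.

import numpy as np

def wmv_classify(x_sources, train_sources, train_labels, weights):
    # WMV: each source s decides by nearest neighbor in its own metric;
    # the per-source decisions are then fused by a weighted majority vote.
    votes = {}
    for s, (x, train) in enumerate(zip(x_sources, train_sources)):
        dists = np.linalg.norm(train - x, axis=1)           # metric of source s
        label = int(train_labels[np.argmin(dists)])         # decision of source s
        votes[label] = votes.get(label, 0.0) + weights[s]   # weighted vote
    return max(votes, key=votes.get)

def gdm_classify(x_sources, train_sources, train_labels, weights):
    # GDM: the per-source metrics are first composed into one dissimilarity
    # measure; a single nearest-neighbor decision is taken in the fused metric.
    fused = np.zeros(len(train_labels))
    for s, (x, train) in enumerate(zip(x_sources, train_sources)):
        fused += weights[s] * np.linalg.norm(train - x, axis=1)
    return int(train_labels[np.argmin(fused)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy ensemble of 3 sources (e.g. the H, S, I channels), 2 classes,
    # 10 training objects per source, 4 features per object.
    train_labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    train_sources = [rng.normal(train_labels[:, None] * 2.0, 1.0, size=(10, 4))
                     for _ in range(3)]
    x_sources = [rng.normal(2.0, 1.0, size=4) for _ in range(3)]   # object from class 1
    weights = np.ones(3) / 3
    print("WMV decision:", wmv_classify(x_sources, train_sources, train_labels, weights))
    print("GDM decision:", gdm_classify(x_sources, train_sources, train_labels, weights))

The equal weights and Euclidean per-source distances above are placeholders; the paper's comparison concerns the information-theoretic bounds for such schemes rather than any particular choice of weights or metric.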
