Abstract

If knowledge such as classification rules is extracted from sample data in a distributed way, it may be necessary to combine or fuse these rules. In a conventional approach this would typically be done either by combining the classifiers' outputs (e.g., in the form of a classifier ensemble) or by combining the sets of classification rules (e.g., by weighting them individually). In this paper, we introduce a new way of fusing classifiers at the level of the parameters of classification rules. This technique is based on probabilistic generative classifiers that use multinomial distributions for categorical input dimensions and multivariate normal distributions for continuous ones. That is, we have distributions such as Dirichlet or normal-Wishart distributions over the parameters of the classifier. We refer to these distributions as hyperdistributions or second-order distributions. We show that fusing two (or more) classifiers can be done by multiplying the hyperdistributions of the parameters, and we derive simple formulas for that task. Properties of this new approach are demonstrated with a few experiments. The main advantage of this fusion approach is that the hyperdistributions are retained throughout the fusion process. Thus, the fused components may, for example, be used in subsequent training steps (online training).
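To illustrate the idea for the categorical case, the following sketch relies on the standard conjugate-family identity that the product of two Dirichlet densities is again proportional to a Dirichlet, Dir(p; a) · Dir(p; b) ∝ Dir(p; a + b − 1). This is a minimal illustration of hyperdistribution fusion under that identity, not the paper's derivation; the function name and the example parameter vectors are hypothetical.

```python
import numpy as np

def fuse_dirichlet(alpha_1, alpha_2):
    """Fuse two Dirichlet hyperdistributions by multiplying their densities.

    Since Dir(p; a) * Dir(p; b) is proportional to Dir(p; a + b - 1),
    the fused hyperdistribution stays in the Dirichlet family and can
    therefore be updated further in subsequent (online) training steps.
    """
    alpha_1 = np.asarray(alpha_1, dtype=float)
    alpha_2 = np.asarray(alpha_2, dtype=float)
    return alpha_1 + alpha_2 - 1.0

# Hypothetical example: two classifiers trained on separate samples of a
# categorical input dimension with three categories (counts plus unit prior).
alpha_a = [5.0, 2.0, 4.0]
alpha_b = [3.0, 6.0, 2.0]

alpha_fused = fuse_dirichlet(alpha_a, alpha_b)
print(alpha_fused)                       # [7. 7. 5.]
print(alpha_fused / alpha_fused.sum())   # posterior mean of the multinomial parameters
```

The key design point this sketch highlights is closure under multiplication: because the fused result is again a Dirichlet, no information about parameter uncertainty is lost in the fusion step, which is what allows the fused components to serve as a starting point for further training. The normal-Wishart case for continuous dimensions follows the same multiply-and-renormalize pattern, with correspondingly more involved update formulas.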
