In previous approaches, the combination of multiple classifiers has depended heavily on one of three types of classification output: measurement scores (measurement level), rankings (rank level), and the top choice (abstract level). For the most general combination of multiple classifiers, combination methods should be developed at the abstract level. In combining multiple classifiers at this level, most studies have assumed that classifiers behave independently. This assumption degrades and biases classification performance when highly dependent classifiers are added. To overcome this weakness, multiple classifiers should be combined in a probabilistic framework using a Bayesian formalism. A probabilistic combination of the decisions of K classifiers requires a (K+1)st-order probability distribution. However, such a distribution is well known to be unmanageable to store and estimate, even for small K. In this paper, a framework is proposed that optimally identifies a product set of kth-order dependencies, where 1 ≤ k ≤ K, for the product approximation of the (K+1)st-order probability distribution from training samples, and that probabilistically combines multiple decisions over the identified product set using the Bayesian formalism. The framework was tested and evaluated on a standardized CENPARMI database, and the results showed superior performance over other combination methods.
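As a minimal illustration of the setting (not the paper's identification algorithm), the sketch below combines K abstract-level decisions under the usual first-order (conditional-independence) product approximation, P(m | e_1..e_K) ∝ P(m) ∏_k P(e_k | m), and computes the size of the exact (K+1)st-order joint table that the full distribution would require. The class counts, confusion matrices, and variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, L = 3, 4, 3  # number of classes, classifiers, and decision labels (assumed)

prior = np.full(M, 1.0 / M)                    # P(m): uniform class prior
# P(e_k | m): one row-stochastic confusion matrix per classifier,
# shape (K, M, L); rows drawn from a Dirichlet for illustration.
conf = rng.dirichlet(np.ones(L), size=(K, M))

def combine(decisions):
    """Posterior over the true class, multiplying in each classifier's
    top-choice likelihood under the independence assumption."""
    post = prior.copy()
    for k, e in enumerate(decisions):
        post *= conf[k, :, e]                  # multiply in P(e_k | m)
    return post / post.sum()

post = combine([0, 0, 1, 0])
# Storing the exact (K+1)st-order joint P(m, e_1, ..., e_K) would need
# M * L**K entries, which grows unmanageably even for small K.
full_joint_entries = M * L ** K
```

The paper's framework replaces the fixed independence assumption above with an optimally identified product set of kth-order dependencies, so that dependent classifiers no longer bias the combined decision.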