Abstract

In order to probabilistically combine multiple decisions of K classifiers obtained from samples, a (K + 1)st-order probability distribution is needed. It is well known that storing and estimating such a distribution is exponentially complex and is unmanageable even for small K. Chow and Liu (1968), as well as Lewis (1959), proposed approximating an nth-order probability distribution with a product of second-order distributions under a first-order tree dependency. However, we often face cases in which a decision depends on more than two other classifiers; in such cases, first-order dependency is not sufficient to estimate a high-order distribution properly. In this paper, a new method is proposed to optimally approximate a high-order distribution with a product of kth-order dependencies, or (k + 1)st-order distributions, where 1 ⩽ k ⩽ K. The authors also propose a method for identifying high-order dependencies from training samples. The superior performance of the new method is demonstrated by experiments on the recognition of standardized CENPARMI handwritten numerals and KAIST on-line handwritten numerals.
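The first-order tree baseline that the abstract contrasts with the proposed kth-order method can be sketched as follows. This is a minimal illustration of the Chow–Liu construction (empirical pairwise mutual information plus a maximum spanning tree), not the paper's own implementation; all function names are illustrative.

```python
# Sketch of the Chow-Liu (1968) first-order tree approximation:
# P(x_1,...,x_n) is approximated by P(x_r) * prod_i P(x_i | x_parent(i)),
# where the parent structure is a maximum spanning tree over pairwise
# mutual information. Names here are illustrative, not from the paper.
import itertools
from collections import Counter
from math import log

def mutual_information(samples, i, j):
    """Empirical mutual information between variables i and j."""
    n = len(samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    pij = Counter((s[i], s[j]) for s in samples)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(samples, num_vars):
    """Return the edges of a maximum spanning tree over pairwise
    mutual information (Kruskal-style with union-find)."""
    edges = sorted(
        ((mutual_information(samples, i, j), i, j)
         for i, j in itertools.combinations(range(num_vars), 2)),
        reverse=True)
    parent = list(range(num_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The paper's contribution is to replace the second-order (pairwise) factors above with (k + 1)st-order factors, so that each variable may depend on up to k others rather than a single tree parent.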
