Abstract

A mathematical model of the statistically equivalent block (SEB) classifier is discussed, whose decision regions (blocks) may be as general as Borel measurable sets in R^n. It is proved that the recognition rate of the SEB classifier converges in probability to the Bayes rate under the condition log n / k_n → 0, where n is the training sample size and k_n is the number of sample patterns contained in each block. This result improves on conditions previously reported in other papers. In contrast to the traditional "two-step" method of classifier analysis, a one-step scheme that avoids estimating the density of each class is developed in this work to strengthen the mathematical proof. A decision-tree implementation of the SEB classifier, together with the corresponding algorithms, is proposed. A simulation experiment on the SEB classifier was conducted, which 1) supports the above theoretical result; 2) suggests that small values of k_n are preferable in SEB classifier training, in agreement with the theorems given in this paper; and 3) shows the promise of the SEB classifier in pattern recognition applications. As a nonparametric classifier, the SEB classifier outperforms the K-NN rules with respect to time and space complexity; indeed, it has been applied to character font recognition in another work by the author.
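
The abstract does not reproduce the paper's decision-tree algorithm, but the underlying idea, partitioning the feature space into blocks that each hold roughly k_n training samples and labeling a query point by the majority class within its block, can be sketched as follows. This is a hypothetical two-dimensional, Gessaman-style illustration written for this summary, not the author's implementation; the class name SEBClassifier and its interface are assumptions.

```python
# Minimal sketch of a statistically-equivalent-block (SEB) style classifier.
# Assumption: 2-D features and a Gessaman-style partition; NOT the paper's
# decision-tree algorithm, only an illustration of the block idea.
import numpy as np
from collections import Counter


class SEBClassifier:
    def __init__(self, k_n):
        self.k_n = k_n  # samples per block (k_n in the abstract's notation)

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        order = np.argsort(X[:, 0])           # sort by the first coordinate
        X, y = X[order], y[order]
        n = len(y)
        n_slabs = max(1, int(np.ceil(np.sqrt(n / self.k_n))))  # slabs along axis 0
        per_slab = int(np.ceil(n / n_slabs))
        self.cuts0, self.blocks = [], []
        for i in range(0, n, per_slab):
            Xs, ys = X[i:i + per_slab], y[i:i + per_slab]
            self.cuts0.append(Xs[-1, 0])      # upper boundary of this slab
            o = np.argsort(Xs[:, 1])          # split the slab along axis 1
            Xs, ys = Xs[o], ys[o]
            cuts1, labels = [], []
            for j in range(0, len(ys), self.k_n):
                cell = ys[j:j + self.k_n]     # one block of ~k_n samples
                cuts1.append(Xs[min(j + self.k_n, len(ys)) - 1, 1])
                labels.append(Counter(cell.tolist()).most_common(1)[0][0])
            self.blocks.append((np.asarray(cuts1), labels))
        self.cuts0 = np.asarray(self.cuts0)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        out = []
        for x in X:
            # locate the slab, then the block, and return its majority label
            i = min(np.searchsorted(self.cuts0, x[0]), len(self.blocks) - 1)
            cuts1, labels = self.blocks[i]
            j = min(np.searchsorted(cuts1, x[1]), len(labels) - 1)
            out.append(labels[j])
        return np.array(out)
```

Consistent with the convergence condition log n / k_n → 0, k_n in such a sketch would be chosen to grow faster than log n; for instance, k_n proportional to n^(1/2) satisfies the condition, while a constant k_n does not.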

