Abstract

This chapter discusses the role that the relationship between the number of measurements and the number of training patterns plays at various stages in the design of a pattern recognition system. The designer of a pattern recognition system should make every effort to obtain as many samples as possible. As the number of samples increases, not only does the designer have more confidence in the performance of the classifier, but more measurements can also be incorporated into the design without fear of peaking in its performance. However, in many pattern classification problems the number of samples is limited, or obtaining a large number of samples is extremely expensive. If the designer chooses the optimal Bayesian approach, the average performance of the classifier improves monotonically as the number of measurements is increased. Most practical pattern recognition systems nevertheless employ a non-Bayesian decision rule, because the optimal Bayesian approach requires knowledge of the prior densities and, moreover, its complexity precludes the development of real-time recognition systems. The peaking behavior of practical classifiers is caused principally by their nonoptimal use of measurements.
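The peaking behavior described above can be illustrated with a small simulation. The sketch below (an illustrative assumption, not taken from the chapter) trains a plug-in nearest-mean classifier on two Gaussian classes using a fixed, small training set, while the number of features grows; each additional feature is assumed to carry progressively less class information, so estimation error eventually outweighs the added discriminative power and test accuracy can peak and then degrade.

```python
import numpy as np

rng = np.random.default_rng(0)

def peaking_curve(n_train=10, n_test=2000, dims=(1, 2, 5, 10, 25, 50)):
    """Estimate test accuracy of a plug-in nearest-mean classifier as the
    number of measurements d grows while the training set stays fixed.

    Hypothetical setup: feature i separates the two unit-variance Gaussian
    classes by 1/i, so later features are progressively less informative.
    """
    max_d = max(dims)
    sep = 1.0 / np.arange(1, max_d + 1)   # per-feature class separation
    accs = {}
    for d in dims:
        mu0, mu1 = -sep[:d] / 2, sep[:d] / 2
        # small, fixed training set: sample means replace the true means
        Xtr0 = rng.normal(mu0, 1.0, size=(n_train, d))
        Xtr1 = rng.normal(mu1, 1.0, size=(n_train, d))
        m0, m1 = Xtr0.mean(axis=0), Xtr1.mean(axis=0)
        # large test set drawn from the true distributions
        Xte = np.vstack([rng.normal(mu0, 1.0, size=(n_test, d)),
                         rng.normal(mu1, 1.0, size=(n_test, d))])
        yte = np.repeat([0, 1], n_test)
        # decide by distance to the *estimated* class means
        pred = (np.linalg.norm(Xte - m1, axis=1)
                < np.linalg.norm(Xte - m0, axis=1)).astype(int)
        accs[d] = (pred == yte).mean()
    return accs
```

Because the classifier uses estimated rather than true means, it is exactly the kind of nonoptimal rule the abstract refers to: with the training set held fixed, adding weakly informative measurements eventually hurts rather than helps.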
