Abstract

Optimal Bayesian linear classifiers have been studied in the literature for many decades. We demonstrate that all the known results consider only the scenario in which the quadratic polynomial has coincident roots. We then present a complete analysis of the case in which the optimal classifier between two normally distributed classes is pairwise and linear, focusing on special cases of the normal distribution with unequal covariance matrices. We determine the conditions that the mean vectors and covariance matrices must satisfy in order for the optimal classifier to be pairwise linear. In contrast to the state of the art, in all the cases discussed here the classifier is given by a pair of straight lines, a particular case of the general second-degree equation. We also provide empirical results on synthetic data for the Minsky's paradox case, demonstrating that the pairwise linear classifier achieves very good performance. Finally, we test our approach on real-life data from the UCI machine learning repository; the results show the superiority of our scheme over the traditional Fisher's discriminant classifier.
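For context, a minimal sketch of the setting the abstract refers to, in our own notation (the equal-prior assumption is for illustration only). Equating the log-likelihoods of two Gaussian classes with means m_1, m_2 and covariances Sigma_1, Sigma_2 gives the optimal Bayesian decision boundary as the general second-degree equation

\[
\mathbf{x}^{\top}\bigl(\Sigma_1^{-1}-\Sigma_2^{-1}\bigr)\mathbf{x}
- 2\bigl(\mathbf{m}_1^{\top}\Sigma_1^{-1}-\mathbf{m}_2^{\top}\Sigma_2^{-1}\bigr)\mathbf{x}
+ \mathbf{m}_1^{\top}\Sigma_1^{-1}\mathbf{m}_1
- \mathbf{m}_2^{\top}\Sigma_2^{-1}\mathbf{m}_2
+ \ln\frac{|\Sigma_1|}{|\Sigma_2|} = 0.
\]

When the left-hand side factors into a product of two linear forms, this conic degenerates into a pair of straight lines, which is the pairwise linear case studied here; when the two lines coincide, one recovers the coincident-roots scenario treated in the earlier literature.

The flavor of the Minsky's paradox experiment can be conveyed with a small sketch (illustrative parameters and helper names of our own choosing, not the paper's actual experimental setup): two classes share a mean but have "swapped" diagonal covariances, so |Sigma_1| = |Sigma_2| and the boundary above reduces to the pair of lines y = x and y = -x, while Fisher's direction w = S_w^{-1}(m_1 - m_2) nearly vanishes and no single line beats chance.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Minsky's-paradox-style setting (illustrative parameters, not the
# paper's): both classes share the mean (0, 0), with "swapped"
# diagonal covariances diag(a, b) and diag(b, a).
a, b = 4.0, 1.0
X1 = rng.multivariate_normal([0.0, 0.0], np.diag([a, b]), n)  # class 1
X2 = rng.multivariate_normal([0.0, 0.0], np.diag([b, a]), n)  # class 2

def pair_of_lines(X):
    # Equal means and |Sigma_1| = |Sigma_2| reduce the quadratic
    # boundary to x^2 = y^2, i.e. the lines y = x and y = -x;
    # class 1 (larger x-variance) wins where |x| > |y|.
    return np.where(np.abs(X[:, 0]) > np.abs(X[:, 1]), 1, 2)

acc_pl = (np.mean(pair_of_lines(X1) == 1) + np.mean(pair_of_lines(X2) == 2)) / 2
print(f"pair-of-lines accuracy: {acc_pl:.3f}")

# Fisher's direction w = S_w^{-1} (m_1 - m_2) nearly vanishes because
# the sample means coincide, so the single-line rule is close to chance.
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = np.cov(X1.T) + np.cov(X2.T)
w = np.linalg.solve(Sw, m1 - m2)
t = w @ (m1 + m2) / 2

def fisher(X):
    return np.where(X @ w > t, 1, 2)

acc_f = (np.mean(fisher(X1) == 1) + np.mean(fisher(X2) == 2)) / 2
print(f"Fisher's discriminant accuracy: {acc_f:.3f}")

For these parameters the pair-of-lines rule scores well above chance (about 0.70 analytically), while the degenerate Fisher's discriminant stays near 0.50, which is the kind of gap the synthetic experiments summarized above point to.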
