Abstract

We cast a semi-supervised nearest mean classifier, previously introduced by the first author, in a more principled, constrained log-likelihood formulation. This, in turn, leads us to the important suggestion not only to investigate the error rates of semi-supervised learners but also to consider the risk they originally aim to optimize. We demonstrate empirically that, in terms of classification error, comparing supervised to semi-supervised nearest mean classification gives mixed results, while in terms of log-likelihood on the test set, the semi-supervised method consistently outperforms its supervised counterpart. Comparisons to self-learning, a standard approach in semi-supervised learning, are included to further clarify the way in which our constrained nearest mean classifier improves over regular, supervised nearest mean classification.
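To make the constrained estimate concrete, the following Python sketch shows one way a moment constraint of this kind can be imposed on nearest mean estimates: the class means computed from the labeled data are shifted so that their prior-weighted average matches the overall mean computed from all labeled and unlabeled samples. This is only an illustrative sketch; the function names and the particular common-translation correction are assumptions for exposition, not the paper's exact estimator.

    import numpy as np

    def constrained_nmc_means(X_lab, y_lab, X_unl):
        """Illustrative moment-constrained semi-supervised nearest mean estimate.

        Labeled class means are translated by a common vector so that their
        prior-weighted average equals the overall mean of all available
        (labeled + unlabeled) samples, i.e. the kind of constraint the
        log-likelihood formulation is subject to.
        """
        classes = np.unique(y_lab)
        priors = np.array([np.mean(y_lab == c) for c in classes])
        means = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])

        # Overall mean estimated from every available sample.
        total_mean = np.vstack([X_lab, X_unl]).mean(axis=0)

        # Enforce sum_c prior_c * mean_c == total_mean by shifting all
        # class means with one shared correction vector.
        correction = total_mean - priors @ means
        return classes, means + correction

    def predict_nmc(X, classes, means):
        # Assign each sample to the class with the nearest (constrained) mean.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        return classes[np.argmin(dists, axis=1)]

Evaluating such a classifier both by its error rate and by the log-likelihood it assigns to held-out data, as the abstract suggests, can give different pictures of whether the unlabeled data helped.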
