Abstract

We propose a novel semi-supervised learning (SSL) assumption, the high separation probability (HSP) assumption, which is complementary to the common low density separation (LDS) assumption, together with a highly accurate and efficient SSL algorithm, the transductive minimax probability machine (TMPM). Under the HSP assumption, a preferred classifier should ensure that, with high probability, the samples lie far away from the decision boundary; similarly, the LDS assumption states that the decision boundary should lie in a low-density region. A remarkable property of HSP is that it provides an elegant theoretical framework for analyzing the generalization performance of SSL, whereas the LDS assumption lacks a quantitative criterion for how well it fits the data. Moreover, the HSP-based formulation follows the same paradigm as the minimax probability machine (MPM), but it yields an intractable mixed-integer optimization for the transductive label assignment problem. Consequently, we propose TMPM, which approximates the solution iteratively via a label-switching strategy. Theoretically, we analyze the computational complexity, convergence property, and generalization error bound of TMPM. Empirically, we show that TMPM achieves competitive performance on a wide range of datasets, while being almost one order of magnitude faster than existing SSL approaches.
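To make the iterative label-switching idea concrete, the following is a minimal Python sketch of an MPM-style transductive loop: it fits a worst-case-probability classifier on the currently labeled data and then greedily switches individual unlabeled labels whenever a switch increases the worst-case separation probability. The functions `fit_mpm` and `tmpm_sketch` and all implementation details are hypothetical illustrations, not the paper's TMPM formulation; in particular, the naive full refit after every candidate switch is far costlier than the algorithm analyzed in the paper, and the sketch assumes binary labels in {+1, -1} with at least two labeled samples per class.

```python
# Illustrative sketch only; names and details are assumptions, not the paper's TMPM.
import numpy as np
from scipy.optimize import minimize

def fit_mpm(X_pos, X_neg, reg=1e-3):
    """Solve the standard MPM surrogate: min sqrt(w'S+w) + sqrt(w'S-w)
    s.t. w'(mu+ - mu-) = 1, then recover the bias and worst-case probability."""
    mu_p, mu_n = X_pos.mean(0), X_neg.mean(0)
    S_p = np.cov(X_pos, rowvar=False) + reg * np.eye(X_pos.shape[1])
    S_n = np.cov(X_neg, rowvar=False) + reg * np.eye(X_neg.shape[1])

    def obj(w):
        return np.sqrt(w @ S_p @ w) + np.sqrt(w @ S_n @ w)

    cons = {"type": "eq", "fun": lambda w: w @ (mu_p - mu_n) - 1.0}
    w0 = np.linalg.solve(S_p + S_n, mu_p - mu_n)
    w0 = w0 / (w0 @ (mu_p - mu_n))            # feasible starting point
    w = minimize(obj, w0, constraints=[cons], method="SLSQP").x
    kappa = 1.0 / obj(w)                      # optimal worst-case margin parameter
    b = w @ mu_p - kappa * np.sqrt(w @ S_p @ w)
    alpha = kappa**2 / (1.0 + kappa**2)       # worst-case separation probability
    return w, b, alpha

def tmpm_sketch(X_l, y_l, X_u, max_iter=20):
    """Greedy label switching on the unlabeled pool: initialize from the
    supervised MPM's predictions, then flip one unlabeled label at a time
    whenever the flip increases the worst-case separation probability."""
    w, b, alpha = fit_mpm(X_l[y_l == 1], X_l[y_l == -1])
    y_u = np.sign(X_u @ w - b)
    y_u[y_u == 0] = 1
    for _ in range(max_iter):
        improved = False
        for i in range(len(X_u)):
            y_try = y_u.copy()
            y_try[i] = -y_try[i]              # tentatively switch one label
            X_all = np.vstack([X_l, X_u])
            y_all = np.concatenate([y_l, y_try])
            _, _, alpha_try = fit_mpm(X_all[y_all == 1], X_all[y_all == -1])
            if alpha_try > alpha:             # keep the switch if alpha improves
                y_u, alpha = y_try, alpha_try
                w, b, _ = fit_mpm(X_all[y_all == 1], X_all[y_all == -1])
                improved = True
        if not improved:                      # no single switch helps: stop
            break
    return w, b, y_u, alpha
```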
