Abstract

The minimum number of misclassifications achievable with affine hyperplanes on a given set of labeled points is a key quantity in both statistics and computational learning theory. We compare the modern approaches, the regression depth method and the support vector machine, with discriminant analysis. In summary, with currently available algorithms the regression depth method often yields better classification results for small to moderate data sets, say for sample sizes below 1000 and dimensions up to 10, whereas the support vector machine is often more appropriate for larger or higher-dimensional data mining problems.

Keywords: Data mining, Discriminant analysis, Logistic regression, Overlap, Regression depth, Separation, Support vector machine
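To make the key quantity concrete, the following minimal sketch (not from the paper) estimates an upper bound on the minimum number of misclassifications achievable with a single affine hyperplane by fitting a linear soft-margin SVM to a small synthetic two-class sample. The data set, parameter choices, and use of scikit-learn are assumptions for illustration only; any exact or depth-based method from the paper would replace the SVM fit here.

```python
# Illustrative sketch (assumed setup, not the authors' code): a linear SVM
# fits one affine hyperplane, so its training errors give an upper bound on
# the minimum number of misclassifications for this labeled point set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Small, moderately overlapping two-class sample (n < 1000, low dimension),
# i.e. the regime where the abstract favours the regression depth method.
X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           class_sep=0.8, random_state=0)

# Fit the separating hyperplane and count training misclassifications.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
n_misclassified = int(np.sum(clf.predict(X) != y))
print(f"hyperplane misclassifies {n_misclassified} of {len(y)} points")
```

Because the soft-margin SVM minimizes a convex surrogate rather than the misclassification count itself, the printed number is only an upper bound on the quantity of interest; this gap is precisely why exact or depth-based methods can do better on small problems.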
