Abstract

The paper is devoted to two problems: (1) reinforcement of SVM algorithms, and (2) justification of memorization mechanisms for generalization.

(1) The current SVM algorithm was designed for the case when the risk over the set of nonnegative slack variables is defined by the l1 norm. In this paper, along with that classical l1 norm, we consider risks defined by the l2 norm and the l∞ norm. Using these norms, we formulate several modifications of the existing SVM algorithm and show that the resulting modified SVM algorithms can improve (sometimes significantly) the classification performance.

(2) The generalization ability of existing learning algorithms is usually explained by arguments involving uniform convergence of empirical losses to the corresponding expected losses over a given set of functions. However, along with bounds for uniform convergence of empirical losses to expected losses, the VC theory also provides bounds for relative uniform convergence, which lead to more accurate estimates of the expected loss. Advanced methods of estimating the expected risk of error have to leverage these bounds, which also support mechanisms of training data memorization; as the paper demonstrates, such memorization can improve classification performance.
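As a rough sketch of the setting described in (1), using standard soft-margin SVM notation (the weights w, bias b, slack variables ξ_i, and penalty parameter C are assumed here and are not taken from the paper), the three slack risks correspond to primal problems of the form

\min_{w,\,b,\,\xi}\;\tfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{\ell}\xi_i \qquad (\ell_1\ \text{slack risk, classical SVM})

\min_{w,\,b,\,\xi}\;\tfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{\ell}\xi_i^{2} \qquad (\ell_2\ \text{slack risk})

\min_{w,\,b,\,\xi}\;\tfrac{1}{2}\|w\|^{2} + C\max_{1\le i\le \ell}\xi_i \qquad (\ell_\infty\ \text{slack risk})

subject in each case to y_i(w^{\top}x_i + b) \ge 1 - \xi_i and \xi_i \ge 0 for all i. These are the natural formulations implied by the norms named in the abstract; the paper's actual modified algorithms may differ in detail.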
