Abstract

Classifying the training data correctly without over-fitting is one of the goals in machine learning. In this paper, we propose a general Generalization Memorization Machine (GMM) that achieves zero empirical risk while retaining good generalization. Widely applied loss-based learning models can be extended by the GMM to improve their memorization and generalization abilities. Specifically, we propose two new models based on the GMM, called the Hard Generalization Memorization Machine (HGMM) and the Soft Generalization Memorization Machine (SGMM). Both HGMM and SGMM achieve zero empirical risk with good generalization, and the SGMM further improves the capacity and applicability of the HGMM. The optimization problems in the proposed models are quadratic programming problems and can be solved efficiently. Additionally, the recently proposed generalization memorization kernel and the corresponding support vector machine are special cases of our SGMM. Experimental results demonstrate the effectiveness of the proposed HGMM and SGMM in both memorization and generalization.
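The abstract does not reproduce the HGMM/SGMM formulations, but as a rough, non-authoritative sketch of the kind of quadratic program such SVM-style models reduce to, the snippet below solves the standard soft-margin SVM dual with cvxopt. The function name solve_svm_dual and the toy data are illustrative assumptions, not the paper's method.

# Illustrative only: a standard soft-margin SVM dual QP solved with cvxopt,
# standing in for the (unspecified here) HGMM/SGMM quadratic programs.
import numpy as np
from cvxopt import matrix, solvers

def solve_svm_dual(X, y, C=1.0):
    # Dual: max_a 1^T a - 0.5 a^T (yy^T * K) a, s.t. 0 <= a_i <= C, y^T a = 0.
    n = X.shape[0]
    K = X @ X.T                                         # linear-kernel Gram matrix
    P = matrix(np.outer(y, y) * K + 1e-8 * np.eye(n))   # tiny ridge for numerical stability
    q = matrix(-np.ones(n))                             # negated because cvxopt minimizes
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))      # box constraints rewritten as G a <= h
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.reshape(1, -1).astype(float))          # equality constraint y^T a = 0
    b = matrix(0.0)
    solvers.options["show_progress"] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.ravel(sol["x"])                           # optimal dual variables alpha

# Toy usage on a separable two-class set.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(solve_svm_dual(X, y, C=10.0))

Any off-the-shelf QP solver would do here; cvxopt is used only because its qp interface maps directly onto the dual's quadratic term, box constraints, and single equality constraint.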
