Over the past decades, regularization theory has been widely applied across machine learning to derive a large family of novel algorithms. Traditionally, however, regularization focuses only on smoothness and does not fully exploit the underlying discriminative information that is vital for classification. In this paper, we propose a novel regularization algorithm in the least-squares sense, called the discriminatively regularized least-squares classification (DRLSC) method, which is specifically designed for classification. Inspired by several recent geometrically motivated methods, DRLSC directly embeds the discriminative information, as well as the local geometry of the samples, into the regularization term, so that it can exploit as much of the knowledge underlying the samples as possible and maximize the margins between samples of different classes in each local area. Furthermore, by embedding equality-type constraints in the formulation, the solution of DRLSC follows from solving a set of linear equations, and the framework naturally handles multi-class problems. Experiments on both toy and real-world problems demonstrate that DRLSC often outperforms classical regularization algorithms, including regularization networks, support vector machines, and some recently studied manifold regularization techniques.
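To illustrate the property the abstract highlights, that the solution follows from a set of linear equations rather than an iterative optimization, here is a minimal sketch of a plain regularized least-squares classifier. This is a generic ridge-style formulation for illustration only, not the paper's DRLSC objective (whose discriminative regularizer is defined in the full text); the function names and the parameter `lam` are assumptions.

```python
import numpy as np

def ls_classifier_fit(X, y, lam=1e-2):
    """Fit (w, b) minimizing ||X w + b - y||^2 + lam * ||w||^2.

    X : (n, d) sample matrix; y : (n,) labels in {-1, +1}.
    Setting the gradient to zero yields a (d+1) x (d+1) linear
    system, so the solution is obtained by a single solve.
    NOTE: this uses a plain norm penalty, not DRLSC's
    discriminative regularizer.
    """
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # append a bias column
    R = lam * np.eye(d + 1)
    R[-1, -1] = 0.0                        # leave the bias unpenalized
    theta = np.linalg.solve(Xb.T @ Xb + R, Xb.T @ y)
    return theta[:d], theta[d]

def ls_classifier_predict(X, w, b):
    """Assign each sample to the sign of the linear score."""
    return np.sign(X @ w + b)
```

A multi-class extension of this least-squares view typically replaces `y` with a one-hot (or one-vs-rest) target matrix and solves the same linear system with multiple right-hand sides, which is consistent with the abstract's claim that the framework naturally handles multi-class problems.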