Abstract

Logistic regression (LR) is widely applied as a powerful classification method in various fields, and a variety of optimization methods have been developed for it. To cope with large-scale problems, an optimization method for LR must be efficient in both computational cost and memory usage. In this paper, we propose an efficient optimization method based on non-linear conjugate gradient (CG) descent. In each CG iteration, the proposed method employs an optimized step size without an exhaustive line search, which significantly reduces the number of iterations and makes the whole optimization process fast. In addition, building on this CG-based optimization scheme, we propose a novel optimization method for kernel logistic regression (KLR). Unlike ordinary KLR methods, the proposed method optimizes the kernel-based classifier, naturally formulated as a linear combination of sample kernel functions, directly in the reproducing kernel Hilbert space (RKHS) rather than through its linear coefficients. We further propose multiple-kernel logistic regression (MKLR), built on the KLR optimization. MKLR effectively combines multiple types of kernels, optimizing the weights for the kernels within the logistic regression framework. All the proposed methods rely on CG-based optimization and matrix-matrix computation, which is easily parallelized, for example via multi-thread programming. In experiments on multi-class classification using various datasets, the proposed methods exhibit favorable performance in terms of both classification accuracy and computation time.
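To make the step-size scheme concrete, below is a minimal Python sketch of non-linear CG for binary LR in which the step size along each search direction comes from a single one-dimensional Newton step using the LR Hessian, instead of an exhaustive line search. This is an illustration under stated assumptions, not the paper's exact derivation: the Polak-Ribière direction update, the regularizer weight lam, and the function name cg_logistic_regression are assumptions of this sketch.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cg_logistic_regression(X, y, n_iter=100, lam=1e-4):
        """Binary LR fit by non-linear CG (Polak-Ribiere) with a one-dimensional
        Newton step size along the search direction instead of a line search.
        X: (n, d) features; y: (n,) labels in {0, 1}. Illustrative sketch."""
        n, d = X.shape
        w = np.zeros(d)
        g_prev, p = None, None
        for _ in range(n_iter):
            mu = sigmoid(X @ w)
            g = X.T @ (mu - y) + lam * w        # gradient of regularized NLL
            if p is None:
                p = -g                           # first direction: steepest descent
            else:
                beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
                p = -g + beta * p                # Polak-Ribiere update with restart
            # Optimized step: one Newton step for phi(a) = L(w + a p), where
            # phi'(0) = g.p and phi''(0) = p' H p with H = X' R X + lam I.
            Xp = X @ p
            r = mu * (1.0 - mu)                  # diagonal of R
            denom = Xp @ (r * Xp) + lam * (p @ p)
            alpha = -(g @ p) / denom
            w = w + alpha * p
            g_prev = g
        return w

Because phi''(0) = p'Hp needs only the matrix-vector product Xp, each iteration costs a few matrix-vector multiplications, which is where the speed-up over an exhaustive line search comes from in this sketch.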

Highlights

  • Classification is an intensively studied research topic in the pattern recognition field

  • Classification problems have often been addressed in the large-margin framework [5], as represented by the support vector machine (SVM) [6]

  • The proposed multiple-kernel logistic regression (MKLR) enables us to combine multiple kernels effectively, which can be regarded as multiple kernel learning (MKL)


Summary

INTRODUCTION

Classification is an intensively studied research topic in the pattern recognition field. Classification problems have often been addressed in the large-margin framework [5], as represented by the support vector machine (SVM) [6]. While those methods are basically formulated for linear classification, they can be extended to kernel-based methods by employing kernel functions, and they produce promising performances. Unlike ordinary KLR methods, the proposed method optimizes the kernel-based classifier, naturally formulated as a linear combination of sample kernel functions as in SVM, directly in the reproducing kernel Hilbert space (RKHS). In the proposed formulation, by resorting to the optimization method for KLR, we optimize both the kernel-based classifier in the sum of multiple RKHSs and the linear weights for the multiple kernels. This paper contains substantial improvements over its preliminary version [12] in that we develop the kernel-based methods, including MKL, and give new experimental results.
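To illustrate the combined-kernel form that MKLR builds on, here is a minimal sketch assuming RBF base kernels: the classifier in the sum of RKHSs uses the weighted Gram matrix K = sum_m v_m K_m, with the kernel weights v constrained to the simplex. The names rbf_gram, combined_gram, gammas, and the coefficient vector a are hypothetical, introduced only for this illustration; the paper's MKLR alternates between optimizing the classifier f and the weights v.

    import numpy as np

    def rbf_gram(X, Z, gamma):
        """Gram matrix of an RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def combined_gram(X, Z, gammas, v):
        """Weighted sum of Gram matrices, K = sum_m v_m K_m, with the kernel
        weights v projected onto the simplex (v_m >= 0, sum_m v_m = 1)."""
        v = np.asarray(v, dtype=float)
        v = v / v.sum()
        return sum(vm * rbf_gram(X, Z, g) for vm, g in zip(v, gammas))

    # A classifier in the sum of RKHSs is f(x) = sum_i a_i * k_v(x_i, x), so
    # test scores would be: combined_gram(X_test, X_train, gammas, v) @ a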

Notations
RELATED WORKS
Newton-Raphson method
Other optimization methods
Kernel logistic regression
Optimum step size α
MULTIPLE KERNEL LOGISTIC REGRESSION
Optimization for f
Optimization for v
PARALLEL COMPUTING
EXPERIMENTAL RESULTS
Linear classification
Kernel-based classification
Multiple-kernel learning
CONCLUDING REMARKS
