Abstract

The deep multiple kernel learning (DMKL) method has attracted wide attention due to its better classification performance than shallow multiple kernel learning. However, existing DMKL methods struggle to find suitable global model parameters that improve classification accuracy across numerous datasets, and they do not take inter-class correlation and intra-class diversity into account. In this paper, we present a group-based local adaptive deep multiple kernel learning (GLDMKL) method with an lp norm constraint. GLDMKL divides samples into multiple groups according to the multiple kernel k-means clustering algorithm, and the learning process in each well-grouped local space is adaptive deep multiple kernel learning. Because the structure is adaptive, there is no fixed number of layers; the learning model in each group is trained independently, so the number of layers may differ between groups. In each local space, the model is adapted by alternately optimizing the SVM model parameter α and the local kernel weight β, and the proportion of each base kernel in the combined kernel at each layer is adjusted by the local kernel weight, which is constrained by the lp norm to avoid sparsity over the base kernels. The hyperparameters of the kernels are optimized by grid search. Experiments on the UCI and Caltech 256 datasets demonstrate that the proposed method achieves higher classification accuracy than other deep multiple kernel learning methods, especially on datasets with relatively complex data.
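The alternating optimization described above can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a simplified, hypothetical single-layer step in which β is fixed while an SVM solves for α on the combined kernel, after which each base kernel is re-weighted by its margin contribution and β is renormalized under the lp norm (the exact update rule and the RBF base kernels are assumptions for illustration).

```python
# Hypothetical sketch of one alternating MKL step: fix kernel weights beta,
# fit an SVM on the combined kernel (solving for alpha), then re-weight each
# base kernel by its margin contribution and renormalize beta under the
# lp norm (p > 1 keeps the weights dense rather than sparse).
import numpy as np
from sklearn.svm import SVC

def rbf(X, Y, gamma):
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def mkl_step(X, y, gammas, beta, p=2.0, C=1.0):
    Ks = [rbf(X, X, g) for g in gammas]          # base kernels
    K = sum(b * Km for b, Km in zip(beta, Ks))   # combined kernel
    svm = SVC(C=C, kernel="precomputed").fit(K, y)
    a = np.zeros(len(y))
    a[svm.support_] = np.abs(svm.dual_coef_[0])  # alpha from the SVM dual
    ay = a * y
    # margin contribution of each base kernel: (alpha*y)^T K_m (alpha*y)
    contrib = np.array([max(ay @ Km @ ay, 1e-12) for Km in Ks])
    # simplified lp-style re-weighting (assumed update rule), then lp-renormalize
    new_beta = contrib ** (1.0 / (p - 1)) if p > 1 else contrib
    new_beta /= np.linalg.norm(new_beta, ord=p)
    return new_beta, svm

# toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
beta = np.full(3, 1 / 3 ** 0.5)                  # l2-normalized start
for _ in range(5):
    beta, svm = mkl_step(X, y, gammas=[0.1, 1.0, 10.0], beta=beta)
```

With p > 1 the renormalized weights stay strictly positive, which mirrors the non-sparsity motivation for the lp norm constraint in the abstract.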

Highlights

  • Because different kernels have different characteristics and parameter settings, kernel performance varies widely across datasets

  • To solve the above problems, this paper proposes a group-based local adaptive deep multiple kernel learning (GLDMKL) method with lp norm

  • The hyperparameters of basic kernels are adjusted by the grid search method


Introduction

Because different kernels have different characteristics and parameter settings, kernel performance varies widely across datasets, and there is no reliable way to construct or choose a suitable single kernel. To address this, multiple kernel learning (MKL) methods that use a combination of kernels have been proposed [1,2,3,4,5,6,7]; they make full use of the complementary characteristics of the various kernels and adapt better to different datasets. These combinations do not change the structure of the individual kernels, so choosing the right base kernels to combine into a composite kernel remains a major issue.
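To make the idea of a composite kernel concrete, a minimal sketch follows. The specific base kernels (linear, polynomial, RBF) and the fixed weights are illustrative assumptions, not the paper's choices; the point is that a nonnegative combination of valid kernels is itself a valid (positive semidefinite) kernel, so it can be used in an SVM without altering any single kernel's structure.

```python
# Hypothetical illustration: a nonnegative combination of base kernels
# (linear, polynomial, RBF) remains positive semidefinite, so the
# composite is still a valid kernel for an SVM.
import numpy as np

def linear_k(X, Y):
    return X @ Y.T

def poly_k(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def rbf_k(X, Y, gamma=0.5):
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def combined_kernel(X, Y, weights=(0.2, 0.3, 0.5)):
    w1, w2, w3 = weights  # illustrative fixed weights; MKL learns these
    return w1 * linear_k(X, Y) + w2 * poly_k(X, Y) + w3 * rbf_k(X, Y)

X = np.random.default_rng(1).normal(size=(10, 3))
K = combined_kernel(X, X)
# eigenvalues of the symmetrized Gram matrix should all be >= 0
eigs = np.linalg.eigvalsh((K + K.T) / 2)
```

In MKL the fixed `weights` tuple above is what gets learned from data, which is exactly the base-kernel selection problem the introduction raises.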
