Abstract

• The monotone Conjunctive Polynomial feature space is presented.
• Homogeneous polynomial kernels can be decomposed as a combination of monotone conjunctive kernels.
• The optimal monotone Conjunctive Polynomial kernel can be learned via Multiple Kernel Learning.
• The proposed approach proved to be robust to overfitting.
• The monotone Conjunctive Polynomial kernel has a deep structure.

Dot-product kernels are a large family of kernel functions based on the dot product between examples. A recent result states that any dot-product kernel can be decomposed as a non-negative linear combination of homogeneous polynomial kernels of different degrees, and that the coefficients of the combination can be learned by exploiting the Multiple Kernel Learning (MKL) paradigm. In this paper it is proved that, under mild conditions, any homogeneous polynomial kernel defined on binary-valued data can be decomposed into a parametrized finite non-negative linear combination of monotone conjunctive kernels. MKL is then employed to learn the parameters of the combination. Furthermore, we show that our solution produces a deep kernel whose feature space consists of hierarchically organized features of increasing complexity. We also emphasize the connection between our solution and existing deep kernel learning frameworks. A wide empirical assessment is presented to evaluate the proposed framework and to compare it against the baselines on several categorical and binary datasets.
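A minimal sketch of the kind of decomposition the abstract describes, not the paper's exact parametrization: on binary vectors, the monotone conjunctive kernel of arity d counts the conjunctions of d features active in both examples, which equals binomial(⟨x, z⟩, d). The classical identity n^p = Σ_d S(p, d)·d!·binomial(n, d), with S(p, d) the (non-negative) Stirling numbers of the second kind, then expresses the homogeneous polynomial kernel ⟨x, z⟩^p as a finite non-negative combination of conjunctive kernels. The function names below are illustrative.

```python
from math import comb, factorial

def stirling2(p, d):
    # Stirling number of the second kind, via inclusion-exclusion
    return sum((-1)**j * comb(d, j) * (d - j)**p for j in range(d + 1)) // factorial(d)

def mc_kernel(x, z, d):
    # Monotone conjunctive kernel of arity d on binary vectors:
    # counts the d-feature conjunctions active in both x and z.
    n = sum(a * b for a, b in zip(x, z))  # <x, z> = number of shared active features
    return comb(n, d)

def hpoly_via_mc(x, z, p):
    # Homogeneous polynomial kernel <x, z>^p written as a non-negative
    # combination of monotone conjunctive kernels (Stirling weights).
    return sum(stirling2(p, d) * factorial(d) * mc_kernel(x, z, d)
               for d in range(p + 1))

x, z = [1, 0, 1, 1], [1, 1, 1, 0]   # <x, z> = 2
print(hpoly_via_mc(x, z, 3))        # equals 2**3 = 8
```

In an MKL setting, the fixed Stirling weights would be replaced by learned non-negative coefficients over the conjunctive kernels of each arity, which is the learning step the abstract attributes to the proposed framework.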
