Abstract

Graph kernels have recently emerged as a promising approach to perform machine learning on graph-structured data. A graph kernel implicitly embeds graphs in a Hilbert space and computes the inner product between these representations. However, relying on a single inner product operation greatly limits the representational power of kernels between graphs. In this paper, we propose to perform a series of successive embeddings in order to improve the performance of existing graph kernels and derive more expressive kernels. We first embed the input graphs in a Hilbert space using a graph kernel, and then embed them into another space by employing popular kernels for vector data (e.g., the Gaussian kernel). Our experiments on several datasets show that by composing kernels, we can achieve significant improvements in classification accuracy.
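The composition described in the abstract can be carried out without ever materializing the implicit embeddings: given a precomputed graph kernel matrix K, the squared Hilbert-space distance between graphs i and j is K[i,i] + K[j,j] - 2*K[i,j], and a Gaussian kernel can be applied on top of these distances. The sketch below illustrates this general idea only; the function name and the gamma parameter are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def compose_with_rbf(K, gamma=1.0):
    """Compose a precomputed graph kernel matrix K with a Gaussian (RBF) kernel.

    Squared distances in the implicit embedding space are recovered via
    the kernel trick: d(i, j)^2 = K[i, i] + K[j, j] - 2 * K[i, j].
    The composed kernel is then exp(-gamma * d^2).
    """
    diag = np.diag(K)
    # Pairwise squared distances between the implicit embeddings.
    sq_dists = diag[:, None] + diag[None, :] - 2.0 * K
    # Clip tiny negative values caused by floating-point error.
    sq_dists = np.maximum(sq_dists, 0.0)
    return np.exp(-gamma * sq_dists)
```

The composed matrix remains a valid kernel matrix, so it can be used directly with kernelized classifiers that accept precomputed kernels, e.g. scikit-learn's SVC(kernel='precomputed').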
