Abstract

The graph Laplacian is a standard tool in data science, machine learning, and image processing. The corresponding matrix inherits the complex structure of the underlying network and, in certain applications, is densely populated. This makes computations with the graph Laplacian, in particular matrix-vector products, a hard task. A typical application is the computation of a number of its eigenvalues and eigenvectors. Standard methods become infeasible when the number of nodes in the graph is too large. We propose the use of the fast summation based on the nonequispaced fast Fourier transform (NFFT) to perform the dense matrix-vector product with the graph Laplacian quickly, without ever forming the whole matrix. The enormous flexibility of the NFFT algorithm allows us to embed the accelerated multiplication into Lanczos-based eigenvalue routines or iterative linear system solvers, and even to consider kernels other than the standard Gaussian. We illustrate the feasibility of our approach on a number of test problems, from image segmentation to semi-supervised learning based on graph-based PDEs. In particular, we compare our approach with the Nyström method. Moreover, we present and test an enhanced, hybrid version of the Nyström method, which internally uses the NFFT.

Highlights

  • Graphs are a fundamental tool in the modeling of imaging and data science applications [44, 37, 2, 3, 15]

  • We show that the graph Laplacian provides the ideal environment to be used together with the nonequispaced fast Fourier transform (NFFT)-based fast summation, and we obtain the NFFT-based Lanczos method

  • We have successfully applied the computational power of NFFT-based fast summation to core tools of data science
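The embedding of the accelerated matrix-vector product into a Lanczos-based eigenvalue routine can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: a plain dense Gaussian-kernel evaluation takes the place of the NFFT-based fast summation, the matrix-free graph-Laplacian product is wrapped in a SciPy `LinearOperator`, and the smallest eigenpairs are obtained via the Lanczos solver `eigsh` through a simple spectral shift (all names and parameter values are assumptions).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)
n, sigma = 300, 1.0
v = rng.standard_normal((n, 2))          # data points v_1, ..., v_n in R^2

def w_matvec(x):
    # Dense Gaussian-kernel product, W_ji = exp(-||v_j - v_i||^2 / sigma^2);
    # this O(n^2) step is what the NFFT-based fast summation accelerates.
    d2 = ((v[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2) @ x

deg = w_matvec(np.ones(n))               # node degrees d_j = sum_i W_ji

def lap_matvec(x):
    # Unnormalized graph Laplacian L = D - W, applied matrix-free.
    return deg * x - w_matvec(x)

# Gershgorin gives spec(L) in [0, 2*max(deg)]; the shift maps the smallest
# eigenvalues of L to the largest of shift*I - L, which Lanczos finds quickly.
shift = 2.0 * deg.max()
A = LinearOperator((n, n), matvec=lambda x: shift * x - lap_matvec(x),
                   dtype=np.float64)
mu, vecs = eigsh(A, k=4)                 # largest eigenpairs of shift*I - L
vals = np.sort(shift - mu)               # 4 smallest eigenvalues of L
```

Since the fully connected Gaussian-kernel graph has the constant vector in the null space of L, the smallest computed eigenvalue should be numerically zero.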


Summary

Introduction

Graphs are a fundamental tool in the modeling of imaging and data science applications [44, 37, 2, 3, 15]. When the matrix W has entries of the form (2.2), the arithmetic complexity of a matrix-vector product can be reduced to O(n) using the NFFT-based fast summation [31, 32]. This method may be applied when the entries of W can be written as W_ji = K(v_j − v_i), where K : R^d → C is a rotationally invariant, smooth kernel function. For small to medium dimensions d, we obtain a fast approximate algorithm for the matrix-vector multiplication Wx of complexity O(n), cf. Algorithm 3.1. This algorithm is implemented in C and Matlab as applications/fastsum and matlab/fastsum within the NFFT3 software library, see [18], and we use the default Kaiser-Bessel window function. For the multiquadric and inverse multiquadric kernels, the parameter is adjusted as c := c/ρ.
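To make the structure concrete, the following minimal sketch evaluates the product Wx for W_ji = K(v_j − v_i) without ever storing the dense n-by-n matrix; it is the O(n²) baseline that the NFFT-based fast summation replaces, not the fastsum routine itself (function name, block size, and kernel parameter are assumptions):

```python
import numpy as np

def kernel_matvec(v, x, K, block=256):
    # (W x)_j = sum_i K(v_j - v_i) x_i, computed row-block by row-block so
    # that memory stays at O(block * n) instead of O(n^2). Each block of
    # pairwise differences is formed, fed through the kernel, and discarded.
    n = len(v)
    out = np.empty(n)
    for start in range(0, n, block):
        diff = v[start:start + block, None, :] - v[None, :, :]
        out[start:start + block] = K(diff) @ x
    return out

# Example with the Gaussian kernel K(y) = exp(-||y||^2 / sigma^2):
rng = np.random.default_rng(1)
v = rng.standard_normal((1000, 3))       # n = 1000 points in R^3
x = rng.standard_normal(1000)
gauss = lambda y: np.exp(-(y ** 2).sum(-1) / 0.5 ** 2)
y = kernel_matvec(v, x, gauss)
```

The point of the matrix-free formulation is that the same call site can later be swapped for the NFFT3 fastsum routine, leaving the surrounding eigenvalue or linear-system solver untouched.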

For the computation of matrix-vector products with the matrix
Alternative eigenvalue algorithm
Findings
Conclusion
