Abstract

The recently developed memristor technology allows for extremely fast implementation of a number of important matrix operations and algorithms. Moreover, the existence of fast matrix-vector operations offers the opportunity to design new matrix algorithms that exploit these operations. Here, we focus on the spectral decomposition of matrices, a task that plays an important role in a wide variety of applications from different engineering and scientific fields, including network science, control theory, advanced dynamics, and quantum mechanics. While there are a number of algorithms designed to find eigenvalues and eigenvectors of a matrix, these methods often suffer from poor running-time performance. In this work, we present an algorithm for finding eigenvalues and eigenvectors that is designed to be used on memristor crossbar arrays. Although this algorithm can be implemented in a non-memristive system, its fast running time relies on the availability of extremely fast matrix-vector multiplication, as is offered by a memristor crossbar array. In this paper, we (1) show the running-time improvements of existing eigendecomposition algorithms when matrix-vector multiplications are performed on a memristor crossbar array, and (2) present EigSweep, a novel, fully parallel, fast and flexible eigendecomposition algorithm that gives an improvement in running time over traditional eigendecomposition algorithms when all are accelerated by a memristor crossbar. We discuss algorithmic aspects as well as hardware-related aspects of the implementation of EigSweep, and perform an extensive experimental analysis on real-world and synthetic matrices.
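
The abstract does not describe EigSweep itself, but the role that crossbar-accelerated matrix-vector multiplication plays in an iterative eigensolver can be illustrated with a standard power iteration. The sketch below is a minimal, simulated example under stated assumptions: crossbar_matvec is a hypothetical stand-in for the analog crossbar product, and the routine shown is ordinary power iteration for the dominant eigenpair, not the EigSweep algorithm of the paper.

    import numpy as np

    def crossbar_matvec(A, x):
        # Hypothetical placeholder for a memristor-crossbar matrix-vector
        # product. In hardware this would be a single analog read operation;
        # here it is simulated with a standard dense multiply.
        return A @ x

    def power_iteration(A, num_iters=1000, tol=1e-10):
        # Estimate the dominant eigenvalue/eigenvector of A using repeated
        # matrix-vector products -- the operation a crossbar accelerates.
        n = A.shape[0]
        x = np.random.default_rng(0).standard_normal(n)
        x /= np.linalg.norm(x)
        eigval = 0.0
        for _ in range(num_iters):
            y = crossbar_matvec(A, x)      # offloaded to the crossbar in hardware
            new_eigval = x @ y             # Rayleigh quotient estimate
            norm_y = np.linalg.norm(y)
            if norm_y == 0.0:
                break
            x = y / norm_y
            if abs(new_eigval - eigval) < tol:
                eigval = new_eigval
                break
            eigval = new_eigval
        return eigval, x

    if __name__ == "__main__":
        A = np.array([[4.0, 1.0], [2.0, 3.0]])
        lam, v = power_iteration(A)
        print(lam, v)  # dominant eigenvalue ~5, eigenvector ~[1, 1]/sqrt(2)

Because each iteration is dominated by a single matrix-vector product, any eigensolver built around such products inherits the speedup of the crossbar directly, which is the motivation the abstract gives for both accelerating existing methods and designing EigSweep.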
