Abstract

Graph reservoir computing (GraphRC) has gained increasing attention by virtue of its high training efficiency. However, because GraphRC was developed without insight into its internal mechanism, it cannot be fully trusted for deployment in practice. Although some existing approaches can be extended to interpret GraphRC, the specific role played by each neuron (i.e., reservoir node) of GraphRC remains far less explored. To address this issue, the latent short-term memory property of each reservoir node of GraphRC is qualitatively characterized to unravel its role in predicting the graph signal, thereby enabling an interpretable GraphRC. Specifically, we first deduce the equivalence between GraphRC and conventional reservoir computing (RC). The underlying memory properties of GraphRC and its reservoir nodes can then be characterized theoretically by the multisource reachability among the reservoir nodes in the transformed RC. Moreover, the distinct temporal patterns hidden in the reservoir nodes are identified, and an attention mechanism based on these patterns is deployed in GraphRC to improve its performance. The effectiveness of the interpretability analysis and of the improved GraphRC is verified on the Lorenz-96 spatiotemporal chaotic system. Experimental results on the Lorenz-96 system and three real-world traffic datasets demonstrate that the improved GraphRC is superior to the original GraphRC and achieves prediction performance comparable to state-of-the-art baseline models at a much lower training cost.
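To make the setting concrete, the following is a minimal, illustrative sketch of a graph echo state network in the spirit of GraphRC: each graph node carries its own reservoir state, neighbor states are mixed through a normalized adjacency matrix, and a ridge-regression readout predicts the next graph signal. All names, weight scalings, and hyperparameters here are assumptions for illustration, not the authors' exact formulation; the Kronecker-structured update `A_hat @ x @ W` is one common way to realize the GraphRC-to-conventional-RC equivalence mentioned above.

```python
import numpy as np

# Minimal graph echo state network (GESN) sketch -- an illustrative
# assumption, not the paper's exact model. N graph nodes, each carrying
# an R-dimensional reservoir state; neighbor states are mixed through
# the row-normalized adjacency A_hat.
rng = np.random.default_rng(0)

N, R = 10, 50            # graph nodes, reservoir size per node
T_train, T_test = 500, 100

# Random symmetric graph adjacency, row-normalized (zero diagonal).
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)

# Fixed random reservoir and input weights; rescaling the spectral
# radius below 1 is the usual heuristic for the echo state property.
W = rng.uniform(-1, 1, (R, R))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (1, R))     # scalar signal per graph node

def run_reservoir(U, alpha=0.8):
    """U: (T, N) graph signal. Returns reservoir states of shape (T, N, R)."""
    X = np.zeros((U.shape[0], N, R))
    x = np.zeros((N, R))
    for t in range(U.shape[0]):
        # Neighbor coupling (A_hat) composed with reservoir recurrence (W),
        # plus per-node input injection: a Kronecker-structured update.
        pre = A_hat @ x @ W + U[t][:, None] * W_in
        x = (1 - alpha) * x + alpha * np.tanh(pre)
        X[t] = x
    return X

# Toy graph signal (phase-shifted sines); task: one-step-ahead prediction.
t_axis = np.arange(T_train + T_test + 1)
U = np.sin(0.1 * t_axis[:, None] + np.linspace(0, np.pi, N)[None, :])

X = run_reservoir(U[:-1])

# Linear ridge-regression readout trained on the first T_train steps;
# only this readout is trained, which is the source of RC's low training cost.
Phi = X[:T_train].reshape(T_train, N * R)
Y = U[1:T_train + 1]
W_out = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(N * R), Phi.T @ Y)

pred = X[T_train:].reshape(T_test, N * R) @ W_out
print("test MSE:", np.mean((pred - U[T_train + 1:]) ** 2))
```

Flattening the per-node states into one long state vector, as the readout above does, shows why such a model can be rewritten as a conventional RC whose reservoir matrix is the Kronecker product of the adjacency and recurrence weights; the paper's reachability-based memory analysis is carried out on that transformed RC.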
