Abstract

Recurrent neural networks (RNNs) are ubiquitous computing systems for sequences and multivariate time-series data. While several robust RNN architectures are known, it is unclear how to relate RNN initialization, architecture, and other hyperparameters to accuracy for a given task. In this work, we propose treating RNNs as dynamical systems and correlating hyperparameters with accuracy through Lyapunov spectral analysis, a methodology designed explicitly for nonlinear dynamical systems. To address the fact that RNN features go beyond existing Lyapunov spectral analysis, we propose to infer relevant features from the Lyapunov spectrum with an Autoencoder and an embedding of its Latent representation (AeLLE). Our studies of various RNN architectures show that AeLLE successfully correlates the RNN Lyapunov spectrum with accuracy. Furthermore, the Latent representation learned by AeLLE generalizes to novel inputs from the same task and is formed early in the process of RNN training. The latter property allows for predicting the accuracy to which the RNN will converge when training is complete. We conclude that the representation of RNNs through the Lyapunov spectrum, along with AeLLE, provides a novel method for the organization and interpretation of variants of RNN architectures.