Abstract

Recurrent neural networks (RNNs) have been successfully applied to a variety of problems involving sequential data, but their optimization is sensitive to parameter initialization, architecture, and optimizer hyperparameters. Considering RNNs as dynamical systems, a natural quantity for capturing stability, i.e., the growth and decay over long iterates, is the set of Lyapunov exponents (LEs), which together form the Lyapunov spectrum. The LEs have a bearing on the stability of RNN training dynamics because the forward propagation of information is related to the backward propagation of error gradients. LEs measure the asymptotic rates of expansion and contraction of non-linear system trajectories, and they generalize stability analysis to the time-varying attractors structuring the non-autonomous dynamics of data-driven RNNs. As a tool for understanding and exploiting the stability of training dynamics, the Lyapunov spectrum fills an existing gap between prescriptive mathematical approaches of limited scope and computationally expensive empirical approaches. To leverage this tool, we implement an efficient way to compute LEs for RNNs during training, discuss the aspects specific to standard RNN architectures driven by typical sequential datasets, and show that the Lyapunov spectrum can serve as a robust readout of training stability across hyperparameters. With this exposition-oriented contribution, we hope to draw attention to this under-studied but theoretically grounded tool for understanding training stability in RNNs.
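
For reference, a standard way to state this (the notation here is ours, not taken from the paper): along a trajectory with hidden-state Jacobians $J_t$, form the product $T_t = J_t J_{t-1} \cdots J_1$; the Lyapunov exponents are the asymptotic growth rates of its singular values $\sigma_i$,

$$\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log \sigma_i(T_t), \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n,$$

so a positive leading exponent indicates exponential expansion of small perturbations and a negative one indicates contraction.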

Highlights

  • The propagation of error gradients in deep learning leads to the study of recursive compositions and their stability [1]

  • We present an exposition and example application of Lyapunov exponents for understanding training stability in recurrent neural networks (RNNs)

  • We motivate Lyapunov exponents as a natural quantity related to the stability of dynamics, complementary to existing mathematical approaches to training stability that focus on the singular value spectrum

Summary

Introduction

The propagation of error gradients in deep learning leads to the study of recursive compositions and their stability [1]. Vanishing and exploding gradients arise from long products of Jacobians of the hidden state dynamics whose norm exponentially grows or decays, which can hinder training [2]. To mitigate this sensitivity, much effort has been made to mathematically understand the link between model parameters and the eigen- and singular-value spectra of these long products [3–5]. We describe the stochastic Lyapunov exponents from the ergodic theory of non-autonomous dynamical systems and outline their connection to the conditions that support gradient-based learning in RNNs. Jacobians are linear maps that evolve state perturbations forward along a trajectory, $u_t = J_t u_{t-1}$, e.g., for an initial perturbation $h_0(\epsilon) = h_0 + \epsilon u_0$ with $\epsilon \ll 1$ and a given direction $u_0$.
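
As a concrete illustration of how such a spectrum can be estimated from these Jacobian products, below is a minimal NumPy sketch of the standard QR re-orthonormalization (Benettin-style) algorithm for a vanilla tanh RNN. This is not the paper's implementation: the function and parameter names are hypothetical, and the update rule $h_t = \tanh(W_h h_{t-1} + W_x x_t + b)$ is an assumption made for the example.

    import numpy as np

    def lyapunov_spectrum(W_h, W_x, b, inputs, h0, n_exponents=None):
        """Estimate the Lyapunov spectrum of a vanilla tanh RNN,
        h_t = tanh(W_h h_{t-1} + W_x x_t + b), driven by an input sequence,
        via QR re-orthonormalization of an evolving perturbation basis."""
        n = W_h.shape[0]
        k = n_exponents or n  # number of leading exponents to track (k <= n)
        h = h0.copy()
        # Orthonormal basis of perturbation directions u_t.
        Q = np.linalg.qr(np.random.randn(n, k))[0]
        log_growth = np.zeros(k)
        for x in inputs:
            pre = W_h @ h + W_x @ x + b
            h = np.tanh(pre)
            # Jacobian of the hidden-state map: J_t = diag(1 - tanh(pre)^2) @ W_h.
            J = (1.0 - h**2)[:, None] * W_h
            # Evolve the basis forward (u_t = J_t u_{t-1}) and re-orthonormalize.
            Q, R = np.linalg.qr(J @ Q)
            # |R_ii| are the local expansion/contraction factors along each direction.
            log_growth += np.log(np.abs(np.diag(R)) + 1e-12)
        # Time-averaged log growth rates, approximately in descending order.
        return log_growth / len(inputs)

Recomputing this spectrum at training checkpoints, with the network driven by the training data, gives the kind of readout of training stability across hyperparameters referred to in the abstract.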

