Abstract
Neural networks are widely applied in control applications, yet providing safety guarantees for them is challenging due to their highly nonlinear nature. We provide a comprehensive introduction to the analysis of recurrent neural networks (RNNs) using robust control and dissipativity theory. Specifically, we consider $\mathcal{H}_2$-performance and the $\ell_2$-gain to quantify the robustness of dynamic RNNs with respect to input perturbations. First, we analyze the robustness of RNNs using the proposed robustness certificates, and then we present linear matrix inequality constraints to be used in the training of RNNs to enforce robustness. Finally, we illustrate in a numerical example that the proposed approach enhances the robustness of RNNs.
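The $\ell_2$-gain certificates mentioned above build on standard dissipativity results from robust control. As a minimal, hedged illustration of the idea (not the paper's RNN-specific certificate), the sketch below checks the discrete-time bounded real lemma for a scalar linear system $x^+ = ax + bw$, $z = cx$: if a $p > 0$ makes the LMI matrix below negative definite, the $\ell_2$-gain from $w$ to $z$ is certified to be below $\gamma$. All numerical values are illustrative.

```python
import numpy as np

# Illustrative scalar system x+ = a*x + b*w, z = c*x (values are assumptions).
a, b, c = 0.5, 1.0, 1.0
gamma = 3.0   # candidate l2-gain bound to certify
p = 2.0       # candidate Lyapunov/storage-function parameter P = p > 0

# Bounded real lemma LMI (D = 0):
# [ A'PA - P + C'C   A'PB        ]
# [ B'PA             B'PB - g^2 I]  must be negative definite.
M = np.array([
    [a * p * a - p + c * c, a * p * b],
    [b * p * a,             b * p * b - gamma**2],
])

# Negative definiteness check: all eigenvalues strictly negative.
certified = np.linalg.eigvalsh(M).max() < 0
print(certified)  # True: gamma = 3 is a certified upper bound on the l2-gain
```

In a training context, such a matrix inequality (with the RNN's nonlinearities handled via dissipativity/sector constraints) becomes a constraint on the network weights rather than a post hoc check; solving for `p` in general requires a semidefinite programming solver rather than a fixed candidate.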