Abstract

Model order reduction (MOR) has found widespread application in power systems, enabling tractable yet realistic simulations of complex power grids. Nearly all model reduction techniques construct a linear reduced trial subspace and then apply a Galerkin projection to restrict the evolution of the latent dynamics to that subspace. However, power system models often exhibit a slow decay of singular values, so truncating higher-order modes approximates them poorly; moreover, relying on a linear projection places a fundamental limit on the accuracy of the resulting reduced-order models (ROMs). This paper uses techniques from machine learning to develop a dimensionality reduction framework for large-scale power grid models that overcomes these limitations. In particular, we use an autoencoder (AE) network to learn a low-dimensional nonlinear trial manifold, avoiding the limitations of a linear trial subspace. We then use long short-term memory (LSTM) networks to evolve the reduced dynamics in a non-intrusive manner. The resulting ROMs are compact and several orders of magnitude more accurate than linear projection-based methods. We demonstrate the proposed technique on the standard IEEE 118-Bus system, which has 54 generators, and on the European high-voltage system with 260 generators.
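The AE + LSTM pipeline described above can be sketched schematically as follows. This is a minimal illustration under assumed toy dimensions and untrained random weights (the paper's actual architectures, state dimensions, and training procedure are not specified in the abstract): the full state is encoded onto a nonlinear latent manifold, the latent variables are evolved by a learned one-step map standing in for the trained LSTM, and the decoder lifts the result back to full dimension.

```python
# Hypothetical sketch of a nonlinear-manifold ROM: encode -> evolve in latent
# space (non-intrusively, without touching the full-order equations) -> decode.
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 3                      # full-order and latent dimensions (illustrative)

# One-hidden-layer encoder/decoder with random weights; a real AE would be
# trained on snapshots of the full-order power grid simulation.
W_enc = rng.standard_normal((r, n)) / np.sqrt(n)
W_dec = rng.standard_normal((n, r)) / np.sqrt(r)

def encode(x):
    return np.tanh(W_enc @ x)      # nonlinear map onto the trial manifold

def decode(z):
    return W_dec @ z               # lift latent state back to full space

# Stand-in for the trained LSTM: any learned one-step latent map z_{k+1} = f(z_k).
A = 0.95 * np.eye(r)
def latent_step(z):
    return A @ z

x0 = rng.standard_normal(n)        # initial full-order state
z = encode(x0)
for _ in range(10):                # time-stepping happens entirely in latent space
    z = latent_step(z)
x_rom = decode(z)                  # reconstructed full-order approximation
print(x_rom.shape)
```

Because the latent dynamics are learned from data rather than obtained by projecting the governing equations, the evolution step never queries the full-order model, which is what makes the approach non-intrusive.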
