Abstract

The scope of this contribution is to present some recent results on how interpolation-based data-driven methods, such as the Loewner framework [1] and the AAA algorithm [2], can handle noisy data sets. More precisely, it is assumed that the input-output measurements used in these methods, i.e., transfer function evaluations, are corrupted by additive Gaussian noise.

The notion of "sensitivity to noise" is introduced and used to understand how the location of measurement points affects the "quality" of reduced-order models. For example, models whose poles have high sensitivity are deemed inadmissible, since even small perturbations could cause unwanted behavior (such as instability). Moreover, we show how different data-splitting techniques influence the sensitivity values. Splitting the data is a crucial step in the Loewner framework; we present illustrative examples that show the effects of splitting the data in the "wrong" or in the "right" way.

Finally, we outline some perspectives for future work: we would like to employ statistics and machine learning techniques in order to avoid "overfitting". A model that has learned the noise instead of the true signal is said to overfit: it fits the given noisy dataset well but generalizes poorly to new datasets. We present some possible ways to avoid overfitting for the methods under consideration.
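To make the mechanics concrete, the following is a minimal sketch (not taken from the paper) of the Loewner framework applied to noisy transfer function samples. The toy system H(s), the noise level, and the two splitting strategies are illustrative assumptions; the construction of the Loewner and shifted Loewner matrices and the generalized-eigenvalue pole computation follow the standard framework of [1].

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical toy system (an assumption for illustration): H(s) = 1/(s+1) + 0.5/(s+2)
def H(s):
    return 1.0 / (s + 1.0) + 0.5 / (s + 2.0)

rng = np.random.default_rng(0)
pts = 1j * np.linspace(0.1, 10.0, 20)   # sample points on the imaginary axis
noise = 1e-3 * (rng.standard_normal(pts.size) + 1j * rng.standard_normal(pts.size))
vals = H(pts) + noise                   # transfer function data with additive Gaussian noise

def loewner_poles(lam, w, mu, v, r=2):
    """Poles of an order-r Loewner model from right data (lam, w) and left data (mu, v)."""
    # Loewner and shifted Loewner matrices built entrywise from the divided differences
    L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (
        mu[:, None] - lam[None, :]
    )
    # Rank-revealing SVD, then projection of the pencil (Ls, L) to order r;
    # the model poles are the generalized eigenvalues of the projected pencil
    Y, _, Xh = np.linalg.svd(L)
    Yr, Xr = Y[:, :r], Xh[:r, :].conj().T
    return eig(Yr.conj().T @ Ls @ Xr, Yr.conj().T @ L @ Xr, right=False)

# One split interleaves the data between the left and right partitions ...
print(np.sort_complex(loewner_poles(pts[0::2], vals[0::2], pts[1::2], vals[1::2])))
# ... while another puts low frequencies on one side and high frequencies on the other
n = pts.size // 2
print(np.sort_complex(loewner_poles(pts[:n], vals[:n], pts[n:], vals[n:])))
```

Comparing the two printed pole sets against the true poles at -1 and -2 gives a rough, hands-on sense of how the choice of data split can change the quality of the identified poles, in the spirit of the "wrong" versus "right" splitting examples mentioned above.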
