Abstract
This paper concerns a machine learning approach for the inverse quantification of set-theoretical uncertainty. Inverse uncertainty quantification (e.g., using Bayesian or interval methodologies) is usually performed by iteratively minimizing a distance metric between a set of predicted and measured model responses. Consequently, the corresponding computational effort is large and usually unpredictable, leading to an intractable situation for real-time applications (e.g., as is commonly encountered in process control problems). To achieve a real-time solution to this inverse problem, machine learning is applied to train a deep neural network, consisting of multilayer auto-encoders and a shallow neural network, on a numerically generated data set that captures typical uncertainty in the model parameters. The method is applied to the challenging DLR AIRMOD problem, and it is shown that the obtained accuracy is comparable to that of existing methods in the literature, albeit at a fraction of their computational cost.
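As a minimal illustration of the core idea (and not the paper's actual auto-encoder architecture or the AIRMOD model), the sketch below uses a hypothetical two-parameter forward model and a shallow NumPy network. The inverse map from responses to parameters is learned offline from numerically generated samples, so that at run time a single forward pass replaces the iterative minimization of a distance metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model (NOT the AIRMOD model): an injective map
# from two uncertain parameters to two measurable responses.
def forward(theta):
    t1, t2 = theta[..., 0], theta[..., 1]
    return np.stack([np.exp(t1) + t2, t1 + np.exp(t2)], axis=-1)

# Step 1: numerically generate a training set by sampling the assumed
# parameter ranges and evaluating the forward model.
theta = rng.uniform(0.5, 2.0, size=(2000, 2))
y = forward(theta)

# Standardize inputs and targets for stable training.
y_mu, y_sd = y.mean(0), y.std(0)
t_mu, t_sd = theta.mean(0), theta.std(0)
Y, T = (y - y_mu) / y_sd, (theta - t_mu) / t_sd

# Step 2: train a shallow tanh network to learn the inverse map y -> theta
# by full-batch gradient descent on the mean-squared error.
n_h = 32
W1 = rng.normal(0, 0.5, (2, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(0, 0.5, (n_h, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(8000):
    H = np.tanh(Y @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted (standardized) parameters
    E = (P - T) / len(Y)              # gradient of the averaged MSE loss
    gW2, gb2 = H.T @ E, E.sum(0)
    dH = (E @ W2.T) * (1 - H**2)      # backprop through tanh
    gW1, gb1 = Y.T @ dH, dH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Step 3: real-time inversion -- a single forward pass recovers the
# parameters from a "measured" response, with no iterative optimization.
def invert(y_meas):
    Yn = (y_meas - y_mu) / y_sd
    return (np.tanh(Yn @ W1 + b1) @ W2 + b2) * t_sd + t_mu

theta_true = np.array([1.2, 0.8])
theta_hat = invert(forward(theta_true))
```

The offline training cost is paid once; afterwards every inversion is a fixed, predictable number of matrix multiplications, which is what makes the approach attractive for process-control-style real-time use.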