Abstract

In the context of digital twins and the integration of physics-based models with machine learning tools, this paper proposes a new methodology for model selection and parameter identification. It combines (i) reinforcement learning (RL) for model selection through Thompson-like sampling with (ii) approximate Bayesian computation (ABC) for parameter identification and uncertainty quantification. The two methods are applied together to a nonlinear mechanical oscillator with periodic forcing. Experimental data are used in the analysis, and two different nonlinear models are tested. The initial Beta distribution representing the likelihood of each model is updated according to how successful the model is at reproducing the reference data (the reinforcement learning strategy). At the same time, the prior distribution of the model parameters is updated using a likelihood-free strategy (ABC). In the end, the rewards and the posterior parameter distribution of each model are obtained. The results show that the combined RL-ABC methodology is promising for model selection from bifurcation diagrams: the prior parameter distributions were successfully updated, correlations between parameters were identified, the probabilistic envelopes of the posterior model are consistent with the available data, the most rewarded model was selected, and the reinforcement strategy speeds up the selection process.
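The coupling described above — Beta-distributed rewards updated by a Thompson-like sampling rule, with an ABC accept/reject step supplying the success signal — can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the oscillator response is replaced by a hypothetical sinusoidal surrogate, the two candidate models, the uniform prior, and the tolerance `eps` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental reference data
# (the paper uses measured bifurcation data from a forced oscillator).
theta_true = 1.5
grid = np.linspace(0.0, 2.0 * np.pi, 50)
data = theta_true * np.sin(grid)

# Two hypothetical candidate models sharing one parameter theta.
models = {
    "model_A": lambda theta: theta * np.sin(grid),        # well-specified
    "model_B": lambda theta: theta * np.sin(grid) + 0.5,  # biased competitor
}

# Thompson sampling: one Beta(alpha, beta) reward distribution per model.
beta_params = {name: [1.0, 1.0] for name in models}
posterior_samples = {name: [] for name in models}
eps = 0.3  # ABC acceptance tolerance (assumed value)

for step in range(200):
    # (i) Model selection: draw from each Beta, pick the largest draw.
    draws = {name: rng.beta(a, b) for name, (a, b) in beta_params.items()}
    chosen = max(draws, key=draws.get)

    # (ii) ABC step: draw a parameter from the prior, simulate, compare.
    theta = rng.uniform(0.0, 3.0)
    dist = np.sqrt(np.mean((models[chosen](theta) - data) ** 2))
    accepted = dist < eps

    # Reinforcement: success increments alpha, failure increments beta,
    # so the Beta distribution tracks each model's acceptance rate.
    if accepted:
        beta_params[chosen][0] += 1.0
        posterior_samples[chosen].append(theta)  # ABC posterior sample
    else:
        beta_params[chosen][1] += 1.0

# The most rewarded model has the highest mean Beta reward.
best = max(beta_params, key=lambda m: beta_params[m][0] / sum(beta_params[m]))
print("most rewarded model:", best)
```

Accepted parameter draws accumulate in `posterior_samples`, giving the likelihood-free posterior for each model, while the Beta means concentrate sampling on the better model over time — the speed-up effect noted in the abstract.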
