Abstract

This paper extends the recently developed methodology for model selection and parameter identification called RL-ABC (reinforcement learning and approximate Bayesian computation; Ritto et al., 2022) to time-varying systems. To tackle slowly varying systems and detect abrupt changes, three new features are proposed. (1) The probability of sampling the worst model now has a lower bound, so that no model can disappear: a model that performs poorly today might be useful in the future as the system evolves. (2) A memory term (sliding window) is introduced so that past data can be forgotten when updating the reward, which can be useful depending on how fast the system changes. (3) The algorithm detects a change in the system by monitoring the models' acceptance; a significant drop in acceptance indicates a change. When a change is detected, the algorithm is reset: new parameter ranges are computed and the rewards are restarted. To test the proposed strategy, new experimental data are obtained from a test rig with non-linear restoring-force characteristics. The response amplitude of the dynamical experiment is obtained with a control-based continuation strategy while varying the excitation amplitude, and three Duffing-like models are used to represent the system. The results are consistent, and the strategy is able to detect changes and update parameter estimates and model predictions.
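The three features described above lend themselves to a compact illustration. The following is a minimal sketch, not the authors' implementation: names such as `p_min`, `window`, and `accept_drop_threshold` are illustrative assumptions, and the reward here is simply the acceptance indicator averaged over the sliding window.

```python
# Sketch of the three time-varying RL-ABC features described in the abstract:
# (1) lower-bounded model-sampling probability, (2) sliding-window reward,
# (3) change detection via a drop in acceptance, followed by a reset.
# All parameter names and values are illustrative assumptions.
import numpy as np
from collections import deque


class TimeVaryingRLABC:
    def __init__(self, n_models, p_min=0.05, window=50, accept_drop_threshold=0.5):
        self.p_min = p_min                          # (1) floor on sampling probability
        self.rewards = [deque(maxlen=window)        # (2) per-model sliding window
                        for _ in range(n_models)]
        self.accept_drop_threshold = accept_drop_threshold  # (3) drop factor for reset
        self.baseline_acceptance = None

    def sampling_probabilities(self):
        # Mean reward over each model's window; optimistic value before any data.
        means = np.array([np.mean(r) if r else 1.0 for r in self.rewards])
        # (1) Floor the scores so the worst model can never vanish, then normalize.
        p = np.maximum(means, self.p_min)
        return p / p.sum()

    def update(self, model_idx, accepted):
        # (2) Append the newest reward; entries older than the window are forgotten.
        self.rewards[model_idx].append(float(accepted))

    def check_for_change(self, current_acceptance):
        # (3) A significant drop in overall acceptance indicates a system change.
        if self.baseline_acceptance is None:
            self.baseline_acceptance = current_acceptance
            return False
        if current_acceptance < self.accept_drop_threshold * self.baseline_acceptance:
            self.reset()
            return True
        return False

    def reset(self):
        # Restart the rewards; in the paper, new parameter ranges are also
        # recomputed at this point.
        for r in self.rewards:
            r.clear()
        self.baseline_acceptance = None
```

Under these assumptions, the floor `p_min` plays the same role as the lower bound in feature (1), and the `deque(maxlen=window)` realizes the forgetting of feature (2); the actual reward definition and reset procedure in RL-ABC are given in the paper.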
