Abstract

A Bayesian reinforcement learning reliability method is proposed that combines Bayesian inference for failure probability estimation with reinforcement learning-guided sequential experimental design. The reliability-oriented sequential experimental design is framed as a finite-horizon Markov decision process (MDP), whose utility function is defined by a measure of epistemic uncertainty about the Kriging-estimated failure probability, referred to as the integrated probability of misclassification (IPM). On this basis, a one-step Bayes-optimal learning function, termed integrated probability of misclassification reduction (IPMR), is defined along with a compatible convergence criterion. Three strategies are implemented to accelerate IPMR-informed sequential experimental design: (i) analytical derivation of the inner expectation in IPMR, simplifying it to a single expectation; (ii) substitution of IPMR with its upper bound, IPMRU, to avoid element-wise computation of its integrand; and (iii) rational pruning of both the quadrature set and the candidate pool in IPMRU to alleviate computer memory constraints. The efficacy of the proposed approach is demonstrated on two benchmark examples and two numerical examples. Results indicate that IPMRU reduces IPM much more rapidly than existing learning functions while requiring far less computational time than IPMR itself. The proposed reliability method therefore offers a substantial advantage in both computational efficiency and accuracy, especially for complex dynamic reliability problems.
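To make the IPM-style uncertainty measure concrete, the sketch below estimates, by Monte Carlo, the average probability that a Kriging (Gaussian process) surrogate misclassifies the sign of a limit-state function g, i.e., the pointwise misclassification probability Φ(−|μ(x)|/σ(x)) averaged over the input distribution. This is a minimal illustration of the general idea, not the authors' exact IPM definition or their IPMR/IPMRU machinery; the limit-state function `g`, the design sizes, and the Gaussian input model are all hypothetical choices made for the example.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical limit-state function: the system fails when g(x) < 0.
def g(x):
    return 5.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(0)

# Small design of experiments for the Kriging (GP) surrogate.
X_train = rng.normal(size=(12, 2))
y_train = g(X_train)
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0), normalize_y=True
).fit(X_train, y_train)

# Monte Carlo candidate pool representing the (assumed Gaussian) input distribution.
X_mc = rng.normal(size=(10_000, 2))
mu, sigma = gp.predict(X_mc, return_std=True)

# Pointwise probability of misclassifying the sign of g is Phi(-|mu|/sigma);
# its sample average over the pool is an IPM-style epistemic-uncertainty measure.
pom = norm.cdf(-np.abs(mu) / np.maximum(sigma, 1e-12))
ipm = pom.mean()

# Failure-probability estimate from the surrogate's predicted sign.
pf_hat = float((mu < 0).mean())
print(ipm, pf_hat)
```

In a sequential design loop, a learning function in the spirit of IPMR would score candidate points by the expected reduction of this quantity after adding the point to the training set, and the loop would stop once the measure falls below a tolerance.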
