Abstract

We propose an algorithm for selecting points from a design domain of small to moderate dimension and for estimating the failure probability. The proposed active learning method detects failure events and progressively refines the boundary between the safe and failure domains, thereby improving the failure probability estimate. The method is particularly useful when each evaluation of the performance function g(x) is very expensive and the function can be characterized as highly nonlinear, noisy, or even discrete-state (e.g., binary). In such cases, only a limited number of calls is feasible, and gradients of g(x) cannot be used. The input design domain is progressively segmented by expanding and adaptively refining a mesh-like lock-free geometrical structure. The proposed triangulation-based approach effectively combines the features of simulation and approximation methods. The algorithm performs two independent tasks: (i) the estimation of probabilities through an ingenious combination of deterministic cubature rules and the application of the divergence theorem, and (ii) the sequential extension of the experimental design with new points. The sequential selection of points from the design domain for future evaluation of g(x) is carried out through a new decision approach, which maximizes the instantaneous information gain in terms of the probability classification of the local region. The extension may be halted at any time, e.g., when sufficiently accurate estimates are obtained. Owing to the exact geometric representation in the input domain, the algorithm is most effective for problems of low dimension, not exceeding eight. The method can handle random vectors with correlated non-Gaussian marginals. When the values of the performance function are valid and credible, the estimation accuracy can be improved by employing a smooth surrogate model based on the evaluated set of points. Finally, we define new factors of global sensitivity to failure, based on the entire failure surface weighted by the density of the input random vector.
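To make the general flow of such a triangulation-based active-learning scheme concrete, the following is a minimal Python sketch under strong simplifying assumptions; it is not the algorithm described above. The paper's estimator uses deterministic cubature rules and the divergence theorem, and its point selection maximizes an information-gain criterion; here, by contrast, the probability content of each simplex is crudely approximated by the mean vertex density times the simplex volume, and the next evaluation of g(x) is simply placed at the centroid of the mixed simplex carrying the largest probability content. The performance function g, the standard Gaussian input, and the helper functions estimate and simplex_volume are hypothetical illustration choices.

```python
# Illustrative sketch (not the authors' algorithm): adaptive triangulation of a
# 2D standard-Gaussian input space for failure probability estimation.
import numpy as np
from math import factorial
from scipy.spatial import Delaunay
from scipy.stats import multivariate_normal

def g(x):
    # hypothetical performance function: g(x) < 0 marks failure
    return 3.0 - x[0] - 0.5 * x[1] ** 2

dim = 2
pdf = multivariate_normal(mean=np.zeros(dim)).pdf

# initial experimental design: a coarse 5 x 5 grid covering +/- 5 standard deviations
grid = np.linspace(-5.0, 5.0, 5)
X = np.array([[a, b] for a in grid for b in grid])
G = np.array([g(x) for x in X])

def simplex_volume(pts):
    # volume of a d-simplex given its (d+1) x d vertex array
    d = pts[1:] - pts[0]
    return abs(np.linalg.det(d)) / factorial(len(pts) - 1)

def estimate(X, G):
    """Triangulate the design, classify simplices by the vertex signs of g,
    and return (failure probability estimate, list of mixed simplices)."""
    tri = Delaunay(X)
    pf, mixed = 0.0, []
    for simp in tri.simplices:
        pts, vals = X[simp], G[simp]
        prob = pdf(pts).mean() * simplex_volume(pts)   # crude probability content
        if np.all(vals < 0):            # all vertices fail: count the whole simplex
            pf += prob
        elif np.any(vals < 0):          # mixed simplex: the limit state crosses it
            mixed.append((prob, pts.mean(axis=0)))
    return pf, mixed

for _ in range(30):                     # sequential extension of the experimental design
    pf, mixed = estimate(X, G)
    if not mixed:
        break
    # refine the mixed simplex carrying the largest estimated probability content
    _, x_new = max(mixed, key=lambda c: c[0])
    X = np.vstack([X, x_new])
    G = np.append(G, g(x_new))

pf, _ = estimate(X, G)
print(f"estimated failure probability: {pf:.4g} after {len(X)} evaluations of g(x)")
```

The extension loop can be halted at any iteration, mirroring the anytime character of the method described in the abstract; a tighter estimate would also assign part of each mixed simplex's probability content to the failure domain rather than ignoring it.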
