Abstract
Context: Model-based data-interpretation techniques are increasingly used to improve understanding of complex-system behavior. Physics-based models that are identified using measurement data are generally used for extrapolation, for example to predict system behavior under other actions. To obtain accurate and reliable extrapolations, model-parameter identification needs to be robust with respect to the systematic modeling uncertainty introduced when modeling complex systems. Approaches such as Bayesian inference are widely used for system identification. More recently, error-domain model falsification (EDMF) has been shown to be useful in situations where little information is available to define the probability density function (PDF) of modeling errors. Model falsification is a discrete-population methodology that is particularly suited to knowledge-intensive tasks in open worlds, where uncertainty cannot be precisely defined.

Objective: This paper compares conventional uses of approaches such as Bayesian inference with EDMF in terms of parameter-identification robustness and extrapolation accuracy.

Method: Using Bayesian inference, three scenarios of conventional assumptions for the inclusion of modeling errors are evaluated for several model classes of a simple beam. These scenarios are compared with results obtained using EDMF. Bayesian model class selection is used to study the benefit of posterior model averaging on the accuracy of extrapolations. Finally, ease of representing and modifying knowledge is illustrated using an example of a full-scale bridge.

Results: This study shows that, in the presence of systematic uncertainty, EDMF leads to robust identification and more accurate predictions than conventional applications of Bayesian inference. These results are illustrated with a full-scale bridge. This example shows that the engineering knowledge necessary to perform parameter identification and remaining-fatigue-life prediction of a complex civil structure is easily represented by the EDMF methodology.

Conclusion: Model classes describing complex systems should include two components: (1) unknown physical parameters that are identified using measurements and (2) conservative estimates of modeling error that cannot be represented solely as uncertainties related to physical parameters. To obtain accurate predictions, both components need to be included in the model-class definition. This study also indicates that Bayesian model class selection may lead to over-confidence in certain model classes, resulting in biased extrapolation.
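To illustrate the falsification step referred to above, the following is a minimal sketch of EDMF for a one-parameter beam model. The forward model `predict_deflection`, the beam dimensions, the measured value, and the uncertainty magnitudes are illustrative assumptions rather than quantities from the study; only the structure of the procedure, computing residuals and falsifying model instances whose residuals fall outside threshold bounds derived from the combined-uncertainty PDF, reflects the EDMF approach discussed here.

```python
# Minimal sketch (not the authors' implementation) of error-domain model
# falsification (EDMF) for a one-parameter model of a simply supported beam.
# All numerical values and the function name are illustrative assumptions.
import numpy as np
from scipy.stats import norm

# Hypothetical forward model: mid-span deflection of a simply supported beam
# under a point load, with Young's modulus E as the unknown parameter.
def predict_deflection(E_pa, load_n=10e3, length_m=10.0, inertia_m4=8e-4):
    return load_n * length_m**3 / (48.0 * E_pa * inertia_m4)

# Synthetic "measurement" at one sensor location (illustrative value only).
measured_deflection_m = 0.0123

# Candidate model instances: a grid of plausible Young's modulus values.
E_grid = np.linspace(20e9, 40e9, 2001)
predictions = predict_deflection(E_grid)

# Combined uncertainty (modeling error plus measurement error), assumed here
# to be zero-mean Gaussian with a conservative standard deviation.
sigma_combined = 0.0010  # metres (assumed)

# Threshold bounds corresponding to a target reliability of identification.
# With several measurement points, the bounds would typically be widened
# (e.g. via a Sidak correction); a single measurement is used for brevity.
target_reliability = 0.95
low, high = norm.ppf([(1 - target_reliability) / 2,
                      (1 + target_reliability) / 2],
                     scale=sigma_combined)

# Falsification: discard instances whose residual falls outside the bounds;
# the remaining instances form the candidate model set.
residuals = predictions - measured_deflection_m
candidate_mask = (residuals >= low) & (residuals <= high)
candidate_E = E_grid[candidate_mask]

print(f"Candidate set: E in [{candidate_E.min():.3e}, {candidate_E.max():.3e}] Pa")
```

In the full methodology, the entire candidate model set, rather than a single best-fit instance, is carried forward to extrapolation tasks such as remaining-fatigue-life prediction, which is what makes the approach robust when the modeling-error PDF cannot be precisely defined.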