Abstract

We contend that the main source of uncertainty in model predictions is often related to uncertainties in the conceptual model. While different approaches have been proposed for designing experiments so as to reduce the uncertainty of estimated parameters, much less attention has been devoted to the problem of experiment design for conceptual model discrimination. We propose a criterion for ensuring that a given design will indeed lead to data with sufficient discriminating capacity. It is based on finding, via automatic calibration algorithms, the minimum distance between simulations of the experiment obtained with the alternative models. If the distances thus obtained are large enough for all model pairings, then the proposed experiment can be used to effectively select one among those models (provided that 'reality' is closely reproduced by one of them). This methodology is applied to a uranium tracer test performed on unaltered granite samples at the Paul Scherrer Institute as part of INTRAVAL test case 1b. Calibrating three conceptual models against half of the data shows that two of them cannot be rejected as valid representations of reality. We use the second half of the data to demonstrate that they do not provide sufficient discriminating capacity, despite the fact that existing criteria would suggest the opposite. We then propose an additional experiment to resolve the indeterminacy.
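The pairwise criterion described above can be illustrated with a minimal sketch. All model forms, parameter values, and the noise threshold below are hypothetical stand-ins (the paper's actual transport models and calibration setup are not reproduced here); the sketch only shows the structure of the test: for each pair of candidate models, calibrate one model to mimic the other's simulated experiment and check whether the minimum achievable distance exceeds the measurement-error scale.

```python
# Hypothetical sketch of the minimum-distance discrimination criterion.
# Two toy "conceptual models" predict tracer concentrations at times t;
# for a given pair, we calibrate one model to reproduce the other's
# simulation as closely as possible (automatic calibration) and record
# the minimum distance. If that minimum exceeds the expected noise level
# for all model pairings, the experiment design can discriminate.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 10.0, 50)  # hypothetical observation times

def model_single(p):
    # Toy model 1: single exponential decay, parameters (amplitude, tau)
    return p[0] * np.exp(-t / p[1])

def model_two_site(p):
    # Toy model 2: adds a slow second component (e.g. a kinetic tail)
    return p[0] * np.exp(-t / p[1]) + p[2] * np.exp(-t / 8.0)

def min_distance(sim_ref, model, p0):
    """Minimum RMS distance between a reference simulation and the
    closest simulation the competing model can produce."""
    obj = lambda p: float(np.sqrt(np.mean((model(p) - sim_ref) ** 2)))
    res = minimize(obj, p0, method="Nelder-Mead")
    return res.fun

# Simulate the experiment with model 2 and ask how closely model 1,
# after calibration, can reproduce it.
ref = model_two_site([1.0, 2.0, 0.4])
d = min_distance(ref, model_single, [1.0, 2.0])

noise_level = 0.05              # assumed measurement-error scale
discriminates = d > noise_level  # criterion for this model pairing
```

In a full application the same check is repeated for every ordered pair of candidate models, and the design is accepted only if all pairwise minimum distances exceed the noise threshold.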
