Abstract

A key challenge for the introduction of any design changes, e.g., advanced fuel concepts, first-of-a-kind nuclear reactor designs, etc., is the cost of the associated experiments, which are required by law to validate the use of computer models for the various stages, from conceptual design to deployment, licensing, operation, and safety. To achieve that, a criterion is needed to decide whether a given experiment, past or planned, is relevant to the application of interest. This allows the analyst to select the best experiments for the given application, leading to the highest measures of confidence in the computer model predictions. The state-of-the-art methods rely on the concept of similarity or representativity, a linear Gaussian-based inner-product metric measuring the angle, as weighted by a prior model parameters covariance matrix, between two gradients, one representing the application and the other a single validation experiment. This manuscript emphasizes the concept of experimental relevance, which extends the basic similarity index to account for the value accrued from past experiments and the associated experimental uncertainties, both currently missing from the extant similarity methods. Accounting for multiple experiments is key to overall experimental cost reduction, as it allows prescreening for redundant information among multiple equally relevant experiments as measured by the basic similarity index. Accounting for experimental uncertainties is also important, as it allows one to select between two different experimental setups, thus providing a quantitative basis for sensor selection and optimization. The proposed metric is denoted ACCRUE, short for Accumulative Correlation Coefficient for Relevance of Uncertainties in Experimental validation.
Using a number of criticality experiments for highly enriched fast metal systems and low-enriched thermal compound systems with an accident-tolerant fuel concept, the manuscript compares the performance of the ACCRUE and basic similarity indices for prioritizing the relevance of a group of experiments to a given application.

Highlights

  • Model validation is one of the key regulatory requirements for developing a scientifically defensible process that establishes confidence in the results of computerized physics models across the developmental stages, from conceptual design to deployment, licensing, operation, and safety

  • We present a brief background on three key topics: 1) sensitivity methods employed for the calculation of first-order derivatives; 2) the generalized linear least-squares (GLLS) adjustment theory, employed to calculate the application bias; and 3) the extant similarity index ck definition

  • This manuscript introduces an extension of the basic similarity metric, denoted the ACCRUE metric and mathematically symbolized by the jk index to distinguish it from the ck similarity metric
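The GLLS adjustment highlighted above can be sketched, under the standard linear Gaussian assumptions, roughly as follows; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def glls_adjust(S_e, C, V, d):
    """Parameter adjustment: C @ S_e.T @ inv(S_e @ C @ S_e.T + V) @ d,
    where S_e holds the experiment sensitivities, C the prior parameter
    covariance, V the experimental covariance, and d the vector of
    measured-minus-calculated discrepancies."""
    G = S_e @ C @ S_e.T + V          # innovation covariance
    return C @ S_e.T @ np.linalg.solve(G, d)

# One parameter, one experiment: equal prior and experimental variances
# split the discrepancy evenly, so the adjustment is 0.5.
S_e = np.array([[1.0]])
C = np.array([[1.0]])
V = np.array([[1.0]])
d = np.array([1.0])
delta_alpha = glls_adjust(S_e, C, V, d)
print(delta_alpha)  # -> [0.5]
```

The application bias estimate then follows by projecting the adjustment through the application sensitivities, i.e., `S_a @ delta_alpha`.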


Summary

INTRODUCTION

Model validation is one of the key regulatory requirements for developing a scientifically defensible process that establishes confidence in the results of computerized physics models across the developmental stages, from conceptual design to deployment, licensing, operation, and safety. The similarity index ck is a scalar quantity that lies between −1.0 and 1.0 and may be interpreted as follows: a zero value implies that no correlations (i.e., no cross-sections with both strong sensitivities and high uncertainties shared between the application and the experimental conditions) exist. This implies that the experimental bias cannot be used to infer the application bias, i.e., it cannot be used to improve the prior estimate of the application response, and the experiment is judged to have no value for the given application. Addressing these two limitations will help analysts determine the minimum number of experiments required to meet a preset level of increased confidence, as well as compare the value of planned experiments, providing a quantitative approach for their optimization. In response to these limitations, this manuscript employs the concept of experimental relevance, as opposed to similarity, in order to distinguish between the possible added value of a new experiment, if any, and the value available from past experiments.
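The basic similarity index described above can be sketched as a covariance-weighted cosine between the application's and the experiment's sensitivity vectors; this is a minimal illustration assuming the standard inner-product definition, with illustrative variable names:

```python
import numpy as np

def similarity_ck(s_app, s_exp, C):
    """ck = (s_app^T C s_exp) / sqrt((s_app^T C s_app) (s_exp^T C s_exp)):
    the cosine of the angle between the two sensitivity (gradient)
    vectors, weighted by the prior parameter covariance C."""
    num = s_app @ C @ s_exp
    den = np.sqrt((s_app @ C @ s_app) * (s_exp @ C @ s_exp))
    return num / den

C = np.diag([0.04, 0.01, 0.09])        # prior parameter covariance (toy values)
s_app = np.array([1.0, 0.5, -0.2])     # application sensitivities
s_exp = np.array([0.9, 0.6, -0.1])     # experiment sensitivities

# An experiment with the same sensitivity profile as the application is
# perfectly similar (ck = 1); by Cauchy-Schwarz, |ck| never exceeds 1.
print(round(similarity_ck(s_app, s_app, C), 6))  # -> 1.0
print(abs(similarity_ck(s_app, s_exp, C)) <= 1.0)  # -> True
```

Because ck weighs each parameter by its prior uncertainty, only cross-sections with both strong sensitivities and high uncertainties contribute appreciably to the index, which is exactly the interpretation given above for a zero value.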

BACKGROUND
Sensitivity Theory
GLLS Adjustment Methodology
ACCRUE INDEX AND VERIFICATION ALGORITHM
Impact of Measurement Uncertainty
Impact of Multiple Experiments
Overall Process
NUMERICAL EXPERIMENTS
Stochastic Non-Intrusive Verification
CONCLUSION AND FURTHER RESEARCH
DATA AVAILABILITY STATEMENT
