Abstract

We demonstrate how machine learning (ML) techniques can be used as an auxiliary tool for nuclear data validation analysis. The output of the ML analysis can inform evaluators and validators about the quality, or lack thereof, of specific nuclear data and benchmark experiments with respect to simulating these experimental benchmark values. To this end, measured and simulated effective neutron multiplication factors, keff, of 875 selected ICSBEP critical assemblies, the sensitivities of the simulated values with respect to nuclear data, and benchmark features (e.g., material nuclide, core geometry) are used as input to a random forest (RF) regression model. The RF is used to encode the complex inter-dependencies between thousands of nuclear data features (i.e., sensitivity profiles and aspects describing the measurements) and the 875 simulated and experimental keff values in order to identify which nuclear data features are most informative for predicting bias. This high-dimensional space of complex relationships is difficult or impossible to search fully using expert judgment alone. As a first, proof-of-concept step, it is demonstrated that this technique correctly traces large discrepancies between simulated and experimental keff back to fabricated shortcomings deliberately introduced into the nuclear data used to simulate keff. In a second step, a real-case scenario, the RF algorithm is used to validate the ENDF/B-VIII.0 library in comparison to ENDF/B-VII.1 nuclear data. One case is showcased in which the chosen ML algorithm highlighted nuclear data (the 19F(n,inl) cross section from 0.4 to 0.9 MeV) that are shown to be problematic by comparison with associated differential experimental data and with nuclear data from other libraries. In addition, it is shown that the RF results point toward poor benchmark experiments and associated underestimated uncertainties (e.g., the PU-SOL-THERM-028 series).
However, using the RF algorithm to validate nuclear data with respect to keff is currently limited to pinpointing groups of questionable nuclear data, owing to the correlations between features that are inherent in the nuclear data themselves and in how keff is simulated. We therefore recommend that the ML methods presented here be used to augment, rather than replace, the expert knowledge of evaluators and validators.
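The workflow described above can be illustrated with a minimal sketch using scikit-learn's RandomForestRegressor. The data here are synthetic stand-ins, not the ICSBEP benchmark suite or real sensitivity profiles: each row mimics a benchmark's nuclear data sensitivity features, the target mimics the simulated-minus-experimental keff bias, and one feature is deliberately made the dominant driver of the bias, analogous to the fabricated-shortcoming proof of concept. The feature count and signal magnitudes are illustrative assumptions.

```python
# Sketch of the RF-based bias attribution described in the abstract,
# on synthetic data (not the actual ICSBEP benchmarks).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_benchmarks, n_features = 875, 50  # 875 assemblies; toy feature count

# Stand-ins for sensitivity profiles and benchmark descriptors.
X = rng.normal(size=(n_benchmarks, n_features))

# Fabricate a keff bias (in units of k) driven mainly by feature 7,
# playing the role of a flawed cross section, plus measurement noise.
y = 500e-5 * X[:, 7] + rng.normal(scale=50e-5, size=n_benchmarks)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, y)

# Rank features by importance; the flawed one should dominate.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most informative feature index:", ranking[0])
```

In the real analysis the feature importances (or related attribution scores) are what point evaluators toward specific reactions and energy ranges, such as the 19F(n,inl) case, rather than toward a synthetic feature index as here.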
