Abstract

Nuclear data are used for a variety of applications, including criticality safety, reactor performance, and material safeguards. Despite the breadth of use cases, the effective neutron multiplication factors, keff, of ICSBEP critical assemblies are primarily used for nuclear data validation; these benchmarks are sensitive to specific energy regions and nuclides and are unable to uniquely constrain nuclear data. As a consequence, general-purpose nuclear data libraries, such as ENDF/B-VIII.0 [1], may have deficiencies that, while not apparent in criticality applications, negatively impact other applications, such as non-destructive analysis of special nuclear material [2, 3] and neutron-diagnosed subcritical experiments [4]. Recent work by the Experiments Underpinned by Computational Learning for Improvements in Nuclear Data (EUCLID) project developed a machine learning tool, RAFIEKI, which uses random forests and the SHAP metric to determine which nuclear data contribute most to the predicted bias between measured and simulated responses (e.g., keff). This paper contrasts RAFIEKI analysis applied to keff alone with RAFIEKI analysis applied to keff paired with either LLNL pulsed sphere measurements or subcritical benchmarks. Two examples show that (a) including pulsed sphere measurements substantially increases the importance of 9Be nuclear data to the bias between 2 and 15 MeV, and (b) including subcritical benchmarks has the potential to disentangle compensating errors between the 240Pu (n,el) and (n,inl) cross sections between 0.1 and 10 MeV. These results show that RAFIEKI analysis applied to response sets that include, but go beyond, keff can aid nuclear data evaluators in identifying issues in nuclear data.
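
To illustrate the random-forest-plus-SHAP approach described above, the following is a minimal sketch in Python using scikit-learn and the shap package. It does not reproduce RAFIEKI itself; the feature matrix, benchmark count, and synthetic bias values are all hypothetical stand-ins, chosen only to show how per-feature SHAP importances could flag which nuclear-data inputs drive a predicted bias.

```python
# Illustrative sketch only, not the RAFIEKI implementation.
# Hypothetical setup: rows are benchmark responses (e.g. keff), columns are
# nuclear-data features (e.g. cross-section values by nuclide/reaction/energy bin),
# and the target is the bias between measured and simulated responses.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_benchmarks, n_features = 200, 50                      # hypothetical sizes
X = rng.normal(size=(n_benchmarks, n_features))         # stand-in nuclear-data features
# Synthetic bias driven by two of the features, plus noise.
bias = 0.5 * X[:, 3] - 0.3 * X[:, 17] + rng.normal(scale=0.05, size=n_benchmarks)

# Fit a random forest mapping nuclear-data features to predicted bias.
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, bias)

# SHAP values attribute each benchmark's predicted bias to individual features;
# averaging their magnitudes ranks which features matter most overall.
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(X)
mean_importance = np.abs(shap_values).mean(axis=0)
top_features = np.argsort(mean_importance)[::-1][:5]
print("Most influential features:", top_features)
```

In a real analysis, the ranked features would correspond to specific nuclide, reaction, and energy-range combinations, which is how an evaluator could read off candidates such as the 9Be or 240Pu contributions discussed in the abstract.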
