The integrity and precision of nuclear data are crucial for a broad spectrum of applications, from national security and nuclear reactor design to medical diagnostics, where the associated uncertainties can significantly impact outcomes. A substantial portion of the uncertainty in nuclear data originates from subjective biases in the evaluation process, a crucial phase in the nuclear data production pipeline. Recent advancements indicate that automating certain routines can mitigate these biases, thereby standardizing the evaluation process and enhancing reproducibility. This research provides a methodology, framework, and metrics for validating automated nuclear data evaluation software, leveraging high-quality synthetic data that closely mimic real experimental observables. The introduced error metric provides a scaled, intuitive measure of evaluation quality by quantifying the estimate’s accuracy and performance across the specified energy range. Synthetic data give access to both the experimental observables and the underlying resonance parameters, enabling direct comparison of different evaluations. The methodology is demonstrated using Ta-181 isotope data in the resolved resonance region. The Automated Resonance Identification Subroutine (ARIS), which operates without prior resonance information, was used to test and showcase the framework’s capabilities using the proposed error metrics. The results demonstrate the effectiveness of the proposed approach and framework for optimizing software parameters and testing hypotheses through controlled “what-if” experiments, such as modifying assumptions about experimental conditions or average resonance parameters.
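The abstract names a synthetic-truth-based error metric without reproducing its definition. As a loose, hypothetical illustration of the general idea (not the paper’s actual metric), one could score an evaluation by its energy-averaged deviation from the known synthetic truth; the function name, the trapezoidal averaging, and all symbols below are assumptions introduced here for illustration only.

    import numpy as np

    def evaluation_error(energy, sigma_true, sigma_est):
        """Hypothetical figure of merit: energy-averaged relative deviation
        of an evaluated cross section from the known synthetic truth.
        Illustrative sketch only, not the metric defined in the paper."""
        rel_dev = np.abs(sigma_est - sigma_true) / sigma_true
        # Trapezoidal integration averages the deviation over the energy range.
        return np.trapz(rel_dev, energy) / (energy[-1] - energy[0])

    # A perfect evaluation scores 0; larger values mean worse agreement.
    e = np.linspace(1.0, 100.0, 500)             # energy grid (eV)
    truth = 10.0 + 5.0 / np.sqrt(e)              # synthetic "true" cross section
    estimate = truth * (1.0 + 0.02 * np.sin(e))  # evaluation with ~2% deviation
    print(evaluation_error(e, truth, estimate))

A metric of this shape has the property the abstract emphasizes: because the synthetic truth is known exactly, the score reflects evaluation quality across the whole energy range rather than agreement with noisy measurements alone.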