Abstract
Nuclear data, especially fission yields, introduce uncertainties into the predicted concentrations of fission products in spent fuel. Herein, we present a new framework that extends data assimilation methods to burnup simulations by using data from post-irradiation examination (PIE) experiments. The adjusted fission yields reduce both the bias and the uncertainty of the predicted fission product concentrations in spent fuel. Our approach modifies fission yields by adjusting the model parameters of the code GEF against PIE experiments. We used the BFMC data assimilation method to account for the non-normality of GEF's fission yields. In the application presented here, the assimilation decreased the average bias of the predicted fission product concentrations from 26% to 15%, and the average relative standard deviation decreased from 21% to 14%. The GEF fission yields after data assimilation agreed better with those in ENDF/B-VIII.0. For Pu-239 thermal fission, the average relative difference from ENDF/B-VIII.0 was 16% before data assimilation and 11% after. For the standard deviations of the fission yields, GEF's were, on average, 16% larger than those of ENDF/B-VIII.0 before data assimilation and 15% smaller after.
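For concreteness, the quoted figures can be read as, per fission product, the relative deviation of the predicted mean concentration from the PIE measurement (bias) and the relative standard deviation of the prediction. The sketch below shows one way to evaluate both statistics from a set of sampled burnup results; the function and variable names are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def bias_and_rel_std(C_samples, E, weights=None):
    """Illustrative relative bias and relative standard deviation of
    predicted fission product concentrations against PIE measurements.

    C_samples : (n_samples, n_fp) predicted concentrations, one row per
                sampled set of fission yields
    E         : (n_fp,) measured PIE concentrations
    weights   : optional (n_samples,) normalized weights (posterior case);
                equal weights reproduce the prior statistics
    """
    n = C_samples.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else weights
    mean = w @ C_samples
    std = np.sqrt(w @ (C_samples - mean) ** 2)
    bias = np.abs(mean - E) / E      # relative bias per fission product
    rel_std = std / mean             # relative standard deviation
    return bias, rel_std
```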
Highlights
Fission product yields (FYs) and their uncertainties are very important for burn-up [1,2,3], decay heat [4,5], and nuclear waste management calculations [6]
We begin with fuel sample U1, which serves as the training data used to adjust GEF’s model parameters
After data assimilation (DA), the fission product (FP) concentrations were recalculated with the posterior FYs (see the sketch after this list)
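As an illustrative sketch of that recalculation step (names and array shapes are assumptions, not taken from the paper), the posterior FP concentrations can be summarized as weighted statistics over the sampled burnup results, with the weights coming from the DA step:

```python
import numpy as np

def posterior_concentrations(C_samples, weights):
    """Posterior mean and standard deviation of FP concentrations.

    C_samples : (n_samples, n_fp) FP concentrations, one burnup run per
                sampled set of GEF fission yields
    weights   : (n_samples,) normalized weights from the DA step
    """
    mean = weights @ C_samples
    var = weights @ (C_samples - mean) ** 2
    return mean, np.sqrt(var)
```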
Summary
Fission product yields (FYs) and their uncertainties are very important for burn-up [1,2,3], decay heat [4,5], and nuclear waste management calculations [6]. Through Monte Carlo sampling of its model parameters, GEF outputs sample sets of FYs that can be used for uncertainty quantification in burnup calculations. Another challenge of the data set was its large degree of inconsistency, i.e., its large prior χ². To account for this, we used a technique called Marginal Likelihood Optimization (MLO) [22,23,24]. It accounts for the discrepancy between experimental and calculated integral data by adding extra uncertainty terms that limit the influence of the data on the adjustment. C(σ) is the vector of simulated PIE data obtained from the FYs generated by GEF with the model parameters σ.
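A minimal sketch of such a BFMC-type weighting is given below, assuming a generalized χ² between C(σ) and the PIE measurements and an extra MLO-style uncertainty term that inflates the experimental covariance. The weight formula w_i = exp(-χ²_i/χ²_min), the diagonal form of the extra term, and all names are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def bfmc_weights(C_samples, E, cov_E, delta=0.0):
    """BFMC-type weights for sampled GEF parameter sets (illustrative).

    C_samples : (n_samples, n_obs) simulated PIE data C(sigma), one row
                per sampled set of GEF model parameters
    E         : (n_obs,) measured PIE data
    cov_E     : (n_obs, n_obs) experimental covariance matrix
    delta     : extra relative uncertainty (MLO-style term) that inflates
                the covariance and limits the influence of inconsistent data
    """
    # Inflate the covariance with the extra uncertainty term (assumed diagonal form).
    cov = cov_E + np.diag((delta * E) ** 2)
    cov_inv = np.linalg.inv(cov)

    # Generalized chi-square of each sample against the measurements.
    resid = C_samples - E                          # (n_samples, n_obs)
    chi2 = np.einsum("ij,jk,ik->i", resid, cov_inv, resid)

    # BFMC weighting: w_i = exp(-chi2_i / chi2_min), normalized to sum to 1.
    w = np.exp(-chi2 / chi2.min())
    return w / w.sum()
```

These weights would then feed the posterior recalculation of the FP concentrations sketched above.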