Abstract

Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically, these data and codes are biased towards fission and high-energy physics applications, yet they are still applied to fusion problems. With growing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically, these methods have focused on simulation speed at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double-differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work examines the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing to both fission- and fusion-relevant reaction channels are demonstrated, highlighting the poor representation of these distributions in the fusion energy regime. For the natC(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the 56Fe(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%. We show how this discrepancy propagates through to varying levels of simulation complexity. Simulations were run with Turnip-MC and the ENDF/B-VII.1 library in an effort to define a new systematic error for this range of applications. Alternative representations of differential and double-differential distributions are explored, along with their impact on computational efficiency and relevant simulation results.
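
The two processed formats named in the abstract can be illustrated with a short sampling sketch. The Python snippet below is not taken from the paper or from Turnip-MC: it uses an arbitrary forward-peaked angular distribution as a stand-in for evaluated data, builds a tabulated CDF and a 32-equal-probability-bin table from it, and draws scattering cosines from each representation.

```python
# Illustrative sketch only (not the paper's code or Turnip-MC): sampling a
# scattering cosine mu from the two processed formats named in the abstract,
# a tabulated cumulative distribution function (CDF) and a coarse
# 32-equal-probability-bin structure.  The underlying angular distribution is
# an arbitrary forward-peaked placeholder, not evaluated natC or 56Fe data.
import numpy as np

rng = np.random.default_rng(42)

# Placeholder "true" angular PDF on mu in [-1, 1] (forward-biased shape).
mu_grid = np.linspace(-1.0, 1.0, 2001)
pdf = np.exp(3.0 * mu_grid)
pdf /= np.trapz(pdf, mu_grid)

# Tabulated CDF representation: store (mu, CDF) pairs, invert by interpolation.
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(mu_grid))))
cdf /= cdf[-1]

def sample_tabulated_cdf(n):
    """Draw mu by linear inversion of the tabulated CDF."""
    return np.interp(rng.random(n), cdf, mu_grid)

# 32 equal-probability bins: boundaries are CDF quantiles; sampling picks a
# bin uniformly, then a mu uniformly within it.
bounds = np.interp(np.linspace(0.0, 1.0, 33), cdf, mu_grid)

def sample_equal_probability_bins(n):
    """Draw mu from the coarse 32-bin equal-probability structure."""
    k = rng.integers(0, 32, n)
    return rng.uniform(bounds[k], bounds[k + 1])

print(sample_tabulated_cdf(3))
print(sample_equal_probability_bins(3))
```

Within an equal-probability bin the sampled density is uniform, so any structure inside the bin is lost; the tabulated-CDF form retains the point-wise shape between tabulated points. That loss of shape is the kind of deviation the abstract quantifies for natC and 56Fe.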

Highlights

  • The Monte Carlo method has been used for many years to simulate the transport of uncharged and charged radiation throughout fission reactors and other nuclear facilities

  • This work examines the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission

  • This study considers how the processing and neutronics codes handle nuclear data within the current modelling methods, and explores alternatives that could be implemented within future code developments for fusion neutronics


Summary

Introduction

The Monte Carlo method has been used for many years to simulate the transport of uncharged and charged radiation throughout fission reactors and other nuclear facilities. The original method for processing the differential and double-differential cross section distributions is to represent them with 32 equal-probability channels. This particular format is specific to the ACE files used in MCNP, though many other codes use an equal-probability structure in a similar fashion with differing numbers of channels. In the fission-specific case, the function is more isotropic in scattering cosine than in the fusion-specific case, in the sense that there is little or no forward bias and no major features. This is reinforced when one considers the complexity of the distributions: the natC distribution is described by a 4th-order polynomial, whereas the 56Fe distribution requires a 12th-order polynomial, as determined by the evaluators and nuclear models. Points distributed over the full range in scattering cosine will be shown to better represent the function.
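
As an illustration of how such a representation can be derived from the evaluated Legendre expansion, the sketch below reconstructs an angular PDF from a made-up set of 12th-order Legendre coefficients (not the ENDF/B-VII.1 56Fe evaluation), collapses it into 32 equal-probability cosine bins, and reports one plausible average-deviation metric between the binned and original PDFs. The coefficient values, grid resolution, and deviation definition are all illustrative assumptions.

```python
# Minimal sketch with illustrative Legendre coefficients (not evaluated data):
# reconstruct f(mu) = sum_l (2l+1)/2 * a_l * P_l(mu), collapse it into 32
# equal-probability cosine bins, and report an average-deviation metric
# between the binned and original PDFs.
import numpy as np
from numpy.polynomial.legendre import legval

# a_l for l = 0..12 (a_0 = 1 by normalisation); placeholder values only.
a = np.array([1.0, 0.55, 0.42, 0.30, 0.22, 0.15, 0.10,
              0.06, 0.035, 0.02, 0.01, 0.005, 0.002])
coeffs = (2.0 * np.arange(a.size) + 1.0) / 2.0 * a

mu = np.linspace(-1.0, 1.0, 4001)
pdf = np.clip(legval(mu, coeffs), 1e-12, None)   # guard against tiny negatives

# Cumulative distribution and the 33 boundaries of 32 equal-probability bins.
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(mu))))
cdf /= cdf[-1]
bounds = np.interp(np.linspace(0.0, 1.0, 33), cdf, mu)

# Piecewise-constant binned PDF: each bin carries probability 1/32.
widths = np.diff(bounds)
idx = np.clip(np.searchsorted(bounds, mu, side="right") - 1, 0, 31)
binned_pdf = (1.0 / 32.0) / widths[idx]

# One plausible "average deviation" metric (the paper's exact definition may differ).
deviation = np.mean(np.abs(binned_pdf - pdf)) / np.mean(pdf)
print(f"average relative deviation of the binned representation: {deviation:.1%}")
```

A low-order (e.g. 4th-order) expansion with little forward bias is captured well by 32 flat bins, whereas a strongly forward-peaked 12th-order expansion concentrates most of the probability in a few narrow bins and loses the detailed shape, which is consistent with the fission versus fusion contrast drawn above.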

Approach
Statistical analysis with transport using Turnip-MC
Point-wise results
Statistical results
Findings
Conclusions
