HetMS-MC: A framework for heterogeneous multiscale Monte Carlo modelling in radiation medicine.


Similar Papers
  • Research Article
  • Cited by 1
  • 10.1088/1361-6560/acf183
Haralick texture analysis for microdosimetry: characterization of Monte Carlo generated 3D specific energy distributions
  • Sep 15, 2023
  • Physics in Medicine & Biology
  • Iymad R Mansour + 1 more

Objective. Explore the application of Haralick textural analysis to 3D distributions of specific energy (energy imparted per unit mass) scored in cell-scale targets considering varying mean specific energy (absorbed dose), target volume, and incident spectrum. Approach. Monte Carlo simulations are used to generate specific energy distributions in cell-scale water voxels ((1 μm)³–(15 μm)³) irradiated by photon sources (mean energies: 0.02–2 MeV) to varying mean specific energies (10–400 mGy). Five Haralick features (homogeneity, contrast, entropy, correlation, local homogeneity) are calculated using an implementation of Haralick analysis designed to reduce sensitivity to grey level quantization and are interpreted using fundamental radiation physics. Main results. Haralick measures quantify differences in 3D specific energy distributions observed with varying voxel volume, absorbed dose magnitude, and source spectrum. For example, specific energy distributions in small (1–3 μm) voxels with low magnitudes of absorbed dose (10 mGy) have relatively high measures of homogeneity and local homogeneity and relatively low measures of contrast and entropy (all relative to measures for larger voxels), reflecting the many voxels with zero specific energy in an otherwise sporadic distribution. With increasing target size, energy is shared across more target voxels, and trends in Haralick measures, such as decreasing homogeneity and increasing contrast and entropy, reflect characteristics of each 3D specific energy distribution. Specific energy distributions for sources of differing mean energy are characterized by Haralick measures, e.g. contrast generally decreases with increasing source energy, and correlation and homogeneity are often (not always) higher for higher energy sources. Significance. 
Haralick texture analysis successfully quantifies spatial trends in 3D specific energy distributions characteristic of radiation source, target size, and absorbed dose magnitude, thus offering new avenues to quantify microdosimetric data beyond first order histogram features. Promising future directions include investigations of multiscale tissue models, targeted radiation therapy techniques, and biological response to radiation.
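
The features above are defined on a grey-level co-occurrence matrix (GLCM). As a rough illustration of the idea only (not the paper's implementation, which is specifically designed to reduce sensitivity to grey-level quantization), a pure-Python sketch for a 3D specific energy array, quantized to a few grey levels and scanned along one axis, might look like:

```python
import math

def glcm_features(volume, n_levels=8):
    # volume: nested list [z][y][x] of specific energy values per voxel
    flat = [v for plane in volume for row in plane for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    # quantize each voxel to one of n_levels grey levels
    q = [[[min(int((v - lo) / span * n_levels), n_levels - 1) for v in row]
          for row in plane] for plane in volume]
    # symmetric co-occurrence counts for nearest neighbours along x
    glcm = [[0.0] * n_levels for _ in range(n_levels)]
    for plane in q:
        for row in plane:
            for a, b in zip(row, row[1:]):
                glcm[a][b] += 1.0
                glcm[b][a] += 1.0
    total = sum(sum(r) for r in glcm)
    p = [[c / total for c in r] for r in glcm]  # joint probabilities
    eps = 1e-12
    idx = [(i, j) for i in range(n_levels) for j in range(n_levels)]
    contrast = sum((i - j) ** 2 * p[i][j] for i, j in idx)
    local_hom = sum(p[i][j] / (1 + (i - j) ** 2) for i, j in idx)
    entropy = -sum(p[i][j] * math.log(p[i][j] + eps) for i, j in idx)
    mu = sum(i * p[i][j] for i, j in idx)
    var = sum((i - mu) ** 2 * p[i][j] for i, j in idx)
    corr = sum((i - mu) * (j - mu) * p[i][j] for i, j in idx) / (var + eps)
    return {"contrast": contrast, "local_homogeneity": local_hom,
            "entropy": entropy, "correlation": corr}
```

For a perfectly uniform volume all co-occurrence mass sits on the diagonal, so contrast is zero and local homogeneity is one; energy shared unevenly across neighbouring voxels raises contrast and entropy, matching the qualitative trends described in the abstract.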

  • Abstract
  • 10.1016/j.ejmp.2017.09.018
Abstract ID: 38 Investigating energy deposition in cellular targets using multiscale tissue models
  • Oct 1, 2017
  • Physica Medica
  • Patricia Oliver + 1 more

  • Research Article
  • Cited by 5
  • 10.1155/2013/697057
Uncertainty Analysis in Reactor Physics Modeling
  • Jan 1, 2013
  • Science and Technology of Nuclear Installations
  • Kostadin Ivanov + 2 more

In recent years, there has been an increasing demand from nuclear research, industry, safety, and regulation for best estimate predictions to be provided with their confidence bounds. Consequently, the Organization for Economic Cooperation and Development (OECD)/Nuclear Energy Agency (NEA) has initiated an international uncertainty analysis in modeling (UAM) benchmark focused on uncertainty analysis in best-estimate coupled code calculations for design, operation, and safety analysis of light water reactors (LWRs). The title of this benchmark is “OECD/NEA UAM-LWR benchmark”. Reference systems and scenarios for coupled code analysis are defined to study the uncertainty effects for all stages of the system calculations. Measured data from plant operation are available for the chosen scenarios. The proposed technical approach is to establish a benchmark for uncertainty analysis in best-estimate modeling and coupled multiphysics and multiscale LWR analysis, using as bases a series of well-defined problems with complete sets of input specifications and reference experimental data. The objective is to determine the uncertainty in LWR system calculations at all stages of a coupled reactor physics/thermal hydraulics calculation. The full chain of uncertainty propagation from basic data, engineering uncertainties, across different scales (multi-scale), and physics phenomena (multiphysics) is tested on a number of benchmark exercises for which experimental data are available and for which the power plant details have been released. 
The principal idea is (a) to subdivide the complex system/scenario into several steps or exercises, each of which can contribute to the total uncertainty of the final coupled system calculation, (b) to identify input, output, and assumptions for each step, (c) to calculate the resulting uncertainty in each step, and (d) to propagate the uncertainties in an integral system simulation for which high quality plant experimental data exist for the total assessment of the overall computer code uncertainty. The main scope covers uncertainty (and sensitivity) analysis (SA/UA) in best estimate modeling for design and operation of LWRs, including methods that are used for safety evaluations. As part of this effort, the development and assessment of different methods or techniques to account for the uncertainties in the calculations are to be investigated and reported to the participants. The general frame of the OECD/NEA UAM-LWR benchmark consists of three phases with different exercises for each phase: Phase I (neutronics phase), Phase II (core phase), and Phase III (system phase). The focus of Phase I is on propagating uncertainties in standalone neutronics calculations and consists of the following three exercises.
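
The stepwise idea in (a)-(d) can be caricatured in a few lines: sample the uncertain basic data, push each sample through a chain of toy sub-models, and read off the output spread at every stage. The sub-models, parameter values, and uncertainties below are invented for illustration and are in no way part of the benchmark:

```python
import random
import statistics

def propagate(n_samples=20000, seed=42):
    """Toy stepwise Monte Carlo uncertainty propagation: each sample of an
    uncertain input is carried through a chain of (invented) sub-models, so
    the spread at every step of the chain can be inspected."""
    rng = random.Random(seed)
    outputs = {"xs": [], "power": [], "temperature": []}
    for _ in range(n_samples):
        # Step 1: uncertain basic data (e.g. a cross section, 2% rel. std.)
        xs = rng.gauss(1.0, 0.02)
        # Step 2: standalone "neutronics" (toy quadratic response)
        power = 100.0 * xs ** 2
        # Step 3: coupled "thermal hydraulics" (toy linear response + noise)
        temperature = 550.0 + 0.8 * power + rng.gauss(0.0, 1.0)
        outputs["xs"].append(xs)
        outputs["power"].append(power)
        outputs["temperature"].append(temperature)
    # (mean, standard deviation) at each stage of the chain
    return {k: (statistics.mean(v), statistics.stdev(v))
            for k, v in outputs.items()}
```

Printing the per-stage standard deviations shows how the initial 2% data uncertainty is amplified or damped by each sub-model, which is the quantity the benchmark exercises are designed to pin down with real codes and real covariance data.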

  • Research Article
  • Cited by 2
  • 10.25165/j.ijabe.20201304.5327
Experimental study on specific grinding energy and particle size distribution of maize grain, stover and cob
  • Jan 1, 2020
  • International Journal of Agricultural and Biological Engineering
  • Jun Fu + 3 more

Reducing the particle size of biomass is of great significance for the rational and efficient utilization of biomass. In this study, maize grain, stover, and cob were comminuted at different speeds (2000-2800 r/min) by a hammer mill with a mesh size of 2.8 mm. The mechanical energy for smashing the three selected samples was obtained directly through the sensor and data testing system. Experimental results demonstrated that the maize cob had the highest total specific energy while the maize grain had the lowest (135.83-181.10 kW·h/t and 27.08-36.23 kW·h/t, respectively). In addition, for the same material, higher hammer mill speed generated more specific energy consumption. The effective specific energy of maize stover followed a trend similar to that of the total specific energy. However, the effective specific grinding energy of maize cob and grain increased initially and then decreased with increasing rotating speed. The fitting curves of the specific energy to mill speeds were determined, and the range of determination coefficients of the regression equation was 0.933-0.996. Particle size distribution curves were drawn by sieving the pulverized particles of the three samples with a series of standard sieves. Fourteen relevant parameters characterizing the particle size distribution were calculated from the screening data. Calculation results demonstrated that larger rotational speed leads to smaller particle sizes. Combining the size parameters, distribution parameters, and shape parameters, it was found that the distributions of the three samples all exhibit a “well-graded fine-skewed mesokurtic” distribution. The Rosin-Rammler function was considered suitable for characterizing the particle size distribution of maize grain, stover, and cob particles, with a coefficient of determination between 0.930 and 0.992. 
Keywords: maize grain, maize stover, maize cob, specific energy, particle size distribution, comminution DOI: 10.25165/j.ijabe.20201304.5327 Citation: Xue Z, Fu J, Chen Z, Ren L Q. Experimental study on specific grinding energy and particle size distribution of maize grain, stover and cob. Int J Agric & Biol Eng, 2020; 13(4): 135–142.
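
The Rosin-Rammler fit mentioned above is commonly performed by double-log linearization of the cumulative passing function P(d) = 1 − exp(−(d/de)^n). A minimal sketch (the sieve data in the test are synthetic, not the paper's measurements):

```python
import math

def fit_rosin_rammler(sizes_mm, passing_frac):
    """Fit P(d) = 1 - exp(-(d/de)**n) by double-log linearization:
    ln(-ln(1 - P)) = n*ln(d) - n*ln(de), then ordinary least squares.
    Returns the uniformity index n, characteristic size de (where
    P = 63.2%), and R^2 in the linearized coordinates."""
    xs = [math.log(d) for d in sizes_mm]
    ys = [math.log(-math.log(1.0 - p)) for p in passing_frac]
    k = len(xs)
    xbar, ybar = sum(xs) / k, sum(ys) / k
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    n = sxy / sxx                       # slope = uniformity index
    de = math.exp(xbar - ybar / n)      # from intercept -n*ln(de)
    intercept = ybar - n * xbar
    ss_res = sum((y - (n * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return n, de, r2
```

Feeding in cumulative passing fractions from a sieve stack returns n, de, and the coefficient of determination that the abstract reports in the 0.930-0.992 range for the three maize materials.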

  • Book Chapter
  • Cited by 4
  • 10.1007/978-90-481-3411-3_6
Sensitivity and Uncertainty Analysis of Models and Data
  • Dec 24, 2009
  • Dan Gabriel Cacuci

This chapter highlights the characteristic features of statistical and deterministic methods currently used for sensitivity and uncertainty analysis of measurements and computational models. The symbiotic linchpin between the objectives of uncertainty analysis and those of sensitivity analysis is provided by the “propagation of errors” equations, which combine parameter uncertainties with the sensitivities of responses (i.e., results of measurements and/or computations) to these parameters. It is noted that all statistical uncertainty and sensitivity analysis methods first commence with the “uncertainty analysis” stage, and only subsequently proceed to the “sensitivity analysis” stage. This procedural path is the reverse of the procedural (and conceptual) path underlying the deterministic methods of sensitivity and uncertainty analysis, where the sensitivities are determined prior to using them for uncertainty analysis. In particular, it is emphasized that the Adjoint Sensitivity Analysis Procedure (ASAP) is the most efficient method for computing exactly the local sensitivities for large-scale nonlinear problems comprising many parameters. This efficiency is underscored with illustrative examples. The computational resources required by the most popular statistical and deterministic methods are discussed comparatively. A brief discussion of unsolved fundamental problems, open for future research, concludes this chapter.
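
The "propagation of errors" linchpin described above combines response sensitivities with parameter covariances as var(R) = S C Sᵀ. A minimal sketch, with illustrative numbers in the usage below:

```python
def response_uncertainty(sensitivities, covariance):
    """Sandwich rule of the 'propagation of errors' equations: combine
    response sensitivities S_i = dR/da_i with the parameter covariance
    matrix C to obtain the response variance var(R) = S C S^T."""
    n = len(sensitivities)
    return sum(sensitivities[i] * covariance[i][j] * sensitivities[j]
               for i in range(n) for j in range(n))
```

For example, with sensitivities S = [2, −1] and covariance C = [[0.04, 0.01], [0.01, 0.09]], the response variance is 0.16 − 0.04 + 0.09 = 0.21. The chapter's point is that deterministic (adjoint) methods compute the S_i first and then apply this rule, whereas statistical methods sample the parameters and estimate the output spread directly.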

  • Research Article
  • Cited by 18
  • 10.1016/j.euromechsol.2017.02.008
Uncertainty analysis in multiscale modeling of concrete based on continuum micromechanics
  • Mar 7, 2017
  • European Journal of Mechanics - A/Solids
  • Luise Göbel + 2 more

  • Research Article
  • Cited by 4
  • 10.1002/mp.15609
Monte Carlo simulations of nanodosimetry and radiolytic species production for monoenergetic proton and electron beams: Benchmarking of GEANT4-DNA and LPCHEM codes.
  • Apr 1, 2022
  • Medical Physics
  • Yasmine Ali + 7 more

In hadrontherapy, biophysical models can be used to predict the biological effect received by cancerous tissues and organs at risk. The input data of these models generally consist of information on nano/micro dosimetric quantities and, for some models, reactive species produced in water radiolysis. In order to fully account for the radiation stochastic effects, these input data have to be provided by Monte Carlo track structure (MCTS) codes that estimate the physical, physico-chemical, and chemical effects of radiation at the molecular scale. The objective of this study is to benchmark two MCTS codes, Geant4-DNA and LPCHEM, that are useful for estimating the biological effects of ions during radiation therapy treatments. In this study we considered the simulation of specific energy spectra for monoenergetic proton beams (10 MeV) as well as radiolysis species production for both electron (1 MeV) and proton (10 MeV) beams with the Geant4-DNA and LPCHEM codes. Options 2, 4, and 6 of the Geant4-DNA physics lists have been benchmarked against LPCHEM. We compared probability distributions of energy transfer points in cylindrical nanometric targets (10 nm) positioned in a liquid water box. Then, yields of radiolytic species (·OH, H2, and other products of water radiolysis) simulated between 10⁻¹² and 10⁻⁶ s after irradiation are compared. Overall, the specific energy spectra and the chemical yields obtained by the two codes are in good agreement considering the uncertainties on the experimental data used to calibrate the parameters of the MCTS codes. For 10 MeV proton beams, ionization and excitation processes are the major contributors to the specific energy deposition (larger than 90%) while attachment, solvation, and vibration processes are minor contributors. LPCHEM simulates tracks with slightly more concentrated energy depositions than Geant4-DNA, which translates into slightly faster recombination. 
Relative deviations (CEV) with respect to the average of the evolution rates of the radical yields between 10⁻¹² and 10⁻⁶ s remain below 10%. When comparing execution times between the codes, we showed that LPCHEM is faster than Geant4-DNA by a factor of about four for 1000 primary particles in all simulation stages (physical, physico-chemical, and chemical). In multi-thread mode (four threads), Geant4-DNA computing times are reduced but remain slower than LPCHEM by ∼20% up to ∼50%. For the first time, the entire physical, physico-chemical, and chemical models of two track structure Monte Carlo codes have been benchmarked, along with an extensive analysis of the effects on the water radiolysis simulation. This study opens up new perspectives in using specific energy distributions and radiolytic species yields from monoenergetic ions in biophysical models integrated into Monte Carlo software.

  • Research Article
  • 10.1093/rpd/ncl466
Microdosimetric analysis for high LET radiation
  • Dec 1, 2006
  • Radiation Protection Dosimetry
  • X.-Q Lu + 1 more

For short-range, high linear energy transfer (LET) radiation therapy, the biological effects are strongly affected by the heterogeneity of the specific energy (z) distribution delivered to tumour cells. Three-dimensional (3-D) dosimetry information at the cellular level is required for this study. An ideal approach would be the reconstruction of the cell and radiation source microdistribution from sequential autoradiographic sections, which is, however, not a practical solution. In this paper, a novel microdosimetry analysis method, which obtains the specific energy (z) distribution directly from the morphological information in individual autoradiographic sections, is applied to human glioblastoma multiforme (GBM) and normal brain tissue specimens in boron neutron capture therapy. The results are consistent with Monte Carlo simulation and demonstrate a uniform radiation source distribution in both GBM and normal brain tissues. We also hypothesise a biophysical model based on specific energy for survival analysis. The specific energy distributions to cell nuclei were calculated with a uniform radiation source distribution. By combining this microdosimetric analysis with measured cell survival data in the low dose region, a cell survival curve at high doses is predicted, which is consistent with the commonly used simple exponential curve model for high LET radiation.

  • Research Article
  • Cited by 18
  • 10.1088/1361-6560/aacf7b
Investigating energy deposition within cell populations using Monte Carlo simulations
  • Jul 31, 2018
  • Physics in Medicine & Biology
  • P A K Oliver + 1 more

In this work, we develop multicellular models of healthy and cancerous human soft tissues, which are used to investigate energy deposition in subcellular targets, quantify the microdosimetric spread in a population of cells, and determine how these results depend on model details. Monte Carlo (MC) tissue models combining varying levels of detail on different length scales are developed: microscopically-detailed regions of interest (>1500 explicitly-modelled cells) are embedded in bulk tissue phantoms irradiated by photons (20 keV–1.25 MeV). Specific energy (z; energy imparted per unit mass) is scored in nuclei and cytoplasm compartments using the EGSnrc user-code egs_chamber; the specific energy mean, standard deviation, and distribution are calculated for a variety of macroscopic doses, D. MC-calculated specific energy distributions are compared with normal distributions having the same mean and standard deviation. For ∼mGy doses, there is considerable variation in energy deposition (microdosimetric spread) throughout a cell population: e.g. for 30 keV photons irradiating melanoma with 7.5 μm cell radius and 3 μm nuclear radius, the relative spread in specific energy for nuclear targets is large, and the fraction of nuclei receiving no energy deposition, f_z=0, is 0.31 for a dose of 10 mGy. If cobalt-60 photons are considered instead, the spread decreases, and f_z=0 decreases to 0.036. These results correspond to randomly arranged cells with cell/nucleus sizes randomly sampled from a normal distribution with a standard deviation of 1 μm. If cells are arranged in a hexagonal lattice and cell/nucleus sizes are uniform throughout the population, the spread decreases further for both sources; f_z=0 decreases to 0.25 and 0.00094 for 30 keV and cobalt-60, respectively. 
Thus, specific energy distributions are sensitive to cell/nucleus sizes and their distributions: variations in specific energy deposited over a cell population are underestimated if targets are assumed to be uniform in size compared with more realistic variation in target size. Bulk tissue dose differs from the mean specific energy for nuclei and cytoplasms across all cell/nucleus sizes, bulk tissues, and incident photon energies, considering a 50 mGy dose level. Overall, results demonstrate the importance of microdosimetric considerations at low doses, and indicate the sensitivity of energy deposition within subcellular targets to incident photon energy, dose level, elemental compositions, and microscopic tissue model.
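
The microdosimetric spread and the fraction of nuclei receiving zero dose discussed above can be illustrated with a toy compound-Poisson model (not the paper's EGSnrc simulation): each nucleus receives a Poisson number of energy-deposition events, each contributing a random single-event specific energy. The single-event scale and all numbers below are assumptions for illustration only:

```python
import math
import random
import statistics

def specific_energy_population(dose_mgy, z1_mean_mgy, n_cells=20000, seed=7):
    """Toy model of microdosimetric spread: k ~ Poisson(lambda) events per
    nucleus, each with an exponentially distributed single-event specific
    energy of mean z1_mean_mgy; lambda is chosen so that the population
    mean specific energy equals the macroscopic dose."""
    rng = random.Random(seed)
    lam = dose_mgy / z1_mean_mgy        # mean number of events per nucleus
    z = []
    for _ in range(n_cells):
        # Knuth's method: multiply uniforms until the product drops
        # below exp(-lam); the number of extra multiplies is Poisson(lam)
        k, prod, target = 0, rng.random(), math.exp(-lam)
        while prod > target:
            k += 1
            prod *= rng.random()
        z.append(sum(rng.expovariate(1.0 / z1_mean_mgy) for _ in range(k)))
    f_zero = sum(1 for v in z if v == 0.0) / len(z)
    return statistics.mean(z), statistics.pstdev(z), f_zero
```

At fixed dose, a larger single-event specific energy (fewer, bigger events, as for lower-energy photons in small targets) widens the distribution and raises the zero-dose fraction exp(−λ), mirroring the 30 keV versus cobalt-60 comparison in the abstract.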

  • Research Article
  • Cited by 1
  • 10.1002/mp.16912
Multiscale Monte Carlo simulations for dosimetry in x-ray breast imaging: Part II - Microscopic scales.
  • Dec 26, 2023
  • Medical Physics
  • Rodrigo T Massera + 2 more

Although the benefits of breast screening and early diagnosis are known for reducing breast cancer mortality rates, the effects and risks of low radiation doses to the cells in the breast are still ongoing topics of study. To study specific energy distributions in cytoplasm and nuclei of cells corresponding to glandular tissue for different x-ray breast imaging modalities. A cubic lattice (500 μm side length) containing 4064 spherical cells was irradiated with photons loaded from phase space files with varying glandular voxel doses. Specific energy distributions were scored for nucleus and cytoplasm compartments using the PENELOPE (v. 2018) + penEasy (v. 2020) Monte Carlo (MC) code. The phase space files, generated in part I of this work, were obtained from MC simulations in a voxelized anthropomorphic phantom corresponding to glandular voxels for different breast imaging modalities, including digital mammography (DM), digital breast tomosynthesis (DBT), contrast enhanced digital mammography (CEDM) and breast CT (BCT). In general, the average specific energy in nuclei is higher than the respective glandular dose scored in the same region, by up to 10%. The specific energy distributions for nucleus and cytoplasm are directly related to the magnitude of the glandular dose in the voxel, with little dependence on the spatial location. For similar glandular voxel doses, the specific energy distribution for nuclei differs between DM/DBT and CEDM/BCT, indicating that distinct x-ray spectra play significant roles. In addition, this behavior is also present when the specific energy distribution is considered taking into account the GDD in the breast. Microdosimetry studies are complementary to the traditional macroscopic breast dosimetry based on the mean glandular dose (MGD). 
For the same MGD, the specific energy distribution in glandular tissue varies between breast imaging modalities, indicating that this effect could be considered when studying the risks of exposing the breast to ionizing radiation.

  • Dissertation
  • 10.25643/bauhaus-universitaet.2555
Stochastic uncertainty quantification for multiscale modeling of polymeric nanocomposites
  • Jan 1, 2015
  • Bac Nam Vu

Nanostructured materials are extensively applied in many fields of material science for new industrial applications, particularly in the automotive and aerospace industries, due to their exceptional physical and mechanical properties. Experimental testing of nanomaterials is expensive, time-consuming, challenging, and sometimes unfeasible. Therefore, computational simulations have been employed as an alternative method to predict macroscopic material properties. The behavior of polymeric nanocomposites (PNCs) is highly complex. The origins of macroscopic material properties reside in the properties and interactions taking place on finer scales. It is therefore essential to use a multiscale modeling strategy to properly account for all the length and time scales associated with these material systems, which span many orders of magnitude. Numerous multiscale models of PNCs have been established; however, most of them connect only two scales. There are few multiscale models for PNCs bridging four length scales (nano-, micro-, meso- and macro-scales). In addition, nanomaterials are stochastic in nature, and the prediction of macroscopic mechanical properties is influenced by many factors such as fine-scale features. The mechanical properties predicted by traditional approaches deviate significantly from the values measured in experiments because the uncertainty of material features is neglected. This discrepancy indicates that the effective macroscopic properties of materials are highly sensitive to various sources of uncertainty, such as loading and boundary conditions and material characteristics, while very few stochastic multiscale models for PNCs have been developed. Therefore, it is essential to construct PNC models within the framework of stochastic modeling and quantify the stochastic effect of the input parameters on the macroscopic mechanical properties of those materials. 
This study aims to develop computational models at four length scales (nano-, micro-, meso- and macro-scales) and hierarchical upscaling approaches bridging length scales from nano- to macro-scales. A framework for uncertainty quantification (UQ) applied to predicting the mechanical properties of the PNCs as a function of material features at different scales is studied. Sensitivity and uncertainty analyses are of great help in quantifying the effect of input parameters, considering both main and interaction effects, on the mechanical properties of the PNCs. To achieve this major goal, the following tasks are carried out. At the nano-scale, molecular dynamics (MD) simulations were used to investigate the deformation mechanism of glassy amorphous polyethylene (PE) in dependence of temperature and strain rate. Steered molecular dynamics (SMD) was also employed to investigate the interfacial characteristics of the PNCs. At the micro-scale, we developed an atomistic-based continuum model represented by a representative volume element (RVE) in which the SWNT's properties and the SWNT/polymer interphase are modeled at the nano-scale, while the surrounding polymer matrix is modeled by solid elements. A two-parameter model was then employed at the meso-scale. A hierarchical multiscale approach has been developed to obtain the structure-property relations at one length scale and transfer the effect to the higher length scales. In particular, we homogenized the RVE into an equivalent fiber. The equivalent fiber was then employed in a micromechanical analysis (i.e. the Mori-Tanaka model) to predict the effective macroscopic properties of the PNC. Furthermore, an averaging homogenization process was also used to obtain the effective stiffness of the PNC at the meso-scale. 
Stochastic modeling and uncertainty quantification consist of the following ingredients. Simple random sampling, Latin hypercube sampling, Sobol' quasirandom sequences, and Iman and Conover's method (inducing correlation in Latin hypercube sampling) are employed to generate independent and dependent sample data, respectively. Surrogate models, such as polynomial regression, moving least squares (MLS), a hybrid method combining polynomial regression and MLS, Kriging regression, and penalized spline regression, are employed as approximations of the mechanical model. The advantage of the surrogate models is their high computational efficiency and robustness, as they can be constructed from a limited amount of available data. Global sensitivity analysis (SA) methods, such as variance-based methods for models with independent and dependent input parameters, Fourier-based techniques for performing variance-based methods, and partial derivatives and elementary effects in the context of local SA, are used to quantify the effects of input parameters and their interactions on the mechanical properties of the PNCs. A bootstrap technique is used to assess the robustness of the global SA methods with respect to their performance. In addition, the probability distributions of the mechanical properties are determined using the probability plot method. Upper and lower bounds of the predicted Young's modulus according to 95% prediction intervals are provided. The above-mentioned methods study the behaviour of intact materials. Novel numerical methods, such as a node-based smoothed extended finite element method (NS-XFEM) and an edge-based smoothed phantom node method (ES-Phantom node), were developed for fracture problems. These methods can be used to account for cracks at the macro-scale in future work. The predicted mechanical properties were validated and verified; they show good agreement with previous experimental and simulation results.
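
Two of the ingredients listed above, Latin hypercube sampling and a variance-based main-effect measure, can be sketched in plain Python. The binned correlation-ratio estimator and the toy response in the test are illustrative stand-ins for the dissertation's surrogate models, not its actual methods:

```python
import random
import statistics

def latin_hypercube(n, k, seed=3):
    """n samples in k dimensions on [0, 1): exactly one sample falls in
    each of the n equal-width strata of every dimension, with the
    stratum order shuffled independently per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(row) for row in zip(*cols)]

def main_effect(xs, ys, n_bins=20):
    """Crude first-order sensitivity index (correlation ratio): variance
    of the bin-wise conditional means of y over bins of x, divided by
    the total variance of y."""
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        bins[min(int(x * n_bins), n_bins - 1)].append(y)
    means = [statistics.mean(b) for b in bins if b]
    return statistics.pvariance(means) / statistics.pvariance(ys)
```

For a toy response y = 5·x1 + x2 the estimator attributes most of the output variance to x1, the kind of ranking the dissertation's variance-based global SA produces for the PNC input parameters.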

  • Research Article
  • 10.5445/ir/1000076128
Numerical Calculation of Specific Energy Distribution of I-125 in Water with Geant4, Using Different Frequency Distributions
  • Jan 1, 2017
  • B Heide

The specific energy distributions in water caused by I-125 atoms, located in the centre of a sphere with a radius of 5 μm, were calculated using Geant4. The dependence of the specific energy on the respective electron frequency distributions was investigated, since there has been a lack of knowledge of the explicit electron frequency distribution up to now. The electron frequency distribution was modelled as a Poisson, log-normal, or uniform distribution. Among others, some differences in the specific energy distribution were found. The lowest average specific energy, however, was within the range of the highest average specific energy and vice versa.

  • Research Article
  • Cited by 9
  • 10.1016/j.anucene.2014.12.014
Multi-physics and multi-scale benchmarking and uncertainty quantification within OECD/NEA framework
  • Jan 3, 2015
  • Annals of Nuclear Energy
  • M Avramova + 8 more

  • Research Article
  • Cited by 26
  • 10.1051/epjconf/20134203003
Status of XSUSA for Sampling Based Nuclear Data Uncertainty and Sensitivity Analysis
  • Jan 1, 2013
  • EPJ Web of Conferences
  • W Zwermann + 6 more

In the present contribution, an overview of the sampling-based XSUSA method for sensitivity and uncertainty analysis with respect to nuclear data is given. The focus is on recent developments and applications of XSUSA. These applications include calculations for critical assemblies, fuel assembly depletion calculations, and steady-state as well as transient reactor core calculations. The analyses are partially performed in the framework of international benchmark working groups (UACSA – Uncertainty Analyses for Criticality Safety Assessment, UAM – Uncertainty Analysis in Modelling). It is demonstrated that, particularly for full-scale reactor calculations, the influence of the nuclear data uncertainties on the results can be substantial. For instance, for the radial fission rate distributions of mixed UO2/MOX light water reactor cores, the 2σ uncertainties in the core centre and periphery can reach values exceeding 10%. For a fast transient, the resulting time behaviour of the reactor power was covered by a wide uncertainty band. Overall, the results confirm the necessity of adding systematic uncertainty analyses to best-estimate reactor calculations.
