Monte Carlo simulations are the basis of all modern x-ray dosimetry methods in diagnostic radiology. Monte Carlo (MC) methods differ from the larger class of computer simulation techniques in that they explicitly compute stochastic events and track their outcomes. For x-ray dosimetry, MC methods track the trajectory of individual x-ray photons, one by one, as they exit an x-ray source, enter the simulated patient anatomy, and undergo scattering and absorption events. The amount of energy deposited in the "patient" is tallied at each location where a dose-deposition interaction takes place. Typically, millions to billions of photon histories are computed, and the energy deposited in each volume element (voxel) of the mathematical phantom is then divided by the tissue mass of that voxel, yielding the absorbed dose for the voxel, defined as imparted energy per unit mass.

Modern computers are very fast, and billions of photon histories can realistically be simulated to estimate the radiation dose deposited in anatomy for a given radiological imaging application. Despite this large number of simulated photons (e.g. 10⁹), actual x-ray imaging involves roughly 10¹⁴ to 10¹⁶ photons for each mammographic or CT acquisition, respectively, a factor of 100,000 or more greater than what is possible in most MC experiments. Thus, it is common to also record the air kerma at the entrance of the phantom for radiographic or mammographic applications, or the air kerma at the center of the field for computed tomography applications. In this way, a coefficient representing the absorbed dose per unit air kerma, in the interesting units of mGy/mGy, is computed. In the old days of radiology, these coefficients used different units and were sometimes called the "roentgen to rad conversion factors". These coefficients allow dose levels in actual imaging procedures to be estimated from physically measured air kerma levels in the radiography room or CT suite.
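The transport-and-tally loop described above can be illustrated with a deliberately simplified sketch. The code below is a toy one-dimensional MC model, not a real dosimetry code: photons enter a slab phantom, travel free paths sampled from an exponential distribution, and at each interaction are either fully absorbed or crudely "scattered" forward with half their energy deposited. All parameter names and values (`mu`, `absorb_frac`, the 50% scatter deposit, unit density) are illustrative assumptions, not physical data; a production code would sample real cross sections, angles, and spectra.

```python
import random

def simulate(n_photons=100_000, n_voxels=10, voxel_cm=1.0,
             mu=0.2, absorb_frac=0.3, e0_keV=60.0,
             rho_g_cm3=1.0, seed=1):
    """Toy 1D Monte Carlo photon transport through a slab of voxels.

    mu          -- assumed total linear attenuation coefficient (1/cm)
    absorb_frac -- assumed probability an interaction absorbs the photon
                   outright; otherwise a crude 'scatter' deposits half
                   the photon energy locally and the photon continues
    Returns absorbed dose per voxel (keV per gram, arbitrary scale).
    """
    rng = random.Random(seed)
    edep_keV = [0.0] * n_voxels          # energy-deposition tally per voxel
    depth_cm = n_voxels * voxel_cm
    for _ in range(n_photons):
        x, e = 0.0, e0_keV               # photon enters at the surface
        while True:
            x += rng.expovariate(mu)     # sample free path to next interaction
            if x >= depth_cm:
                break                    # photon exits the back of the phantom
            v = int(x / voxel_cm)        # voxel where the interaction occurs
            if rng.random() < absorb_frac:
                edep_keV[v] += e         # photoelectric-like: absorb everything
                break
            edep_keV[v] += e / 2         # toy scatter: deposit half, continue
            e /= 2
    mass_g = rho_g_cm3 * voxel_cm**3     # each voxel treated as a 1 cm^2 column
    return [d / mass_g for d in edep_keV]

dose = simulate()
```

Run with the defaults, the per-voxel dose falls off with depth, as expected for an attenuated beam. Dividing each voxel's dose by a measured (or tallied) entrance air kerma would yield exactly the kind of dose-per-unit-air-kerma coefficient described in the text.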