Abstract

The rendition of medical images influences the accuracy and precision of quantitative measurements. Image variations and biases make measuring imaging biomarkers challenging. The objective of this paper is to reduce the variability of computed tomography (CT) quantifications for radiomics and biomarkers using physics-based deep neural networks (DNNs). With the proposed framework, it is possible to harmonize different renditions of a single CT scan (with variations in reconstruction kernel and dose) into an image in close agreement with the ground truth. To this end, a generative adversarial network (GAN) model was developed in which the generator is informed by the scanner's modulation transfer function (MTF). To train the network, a virtual imaging trial (VIT) platform was used to acquire CT images from a set of forty computational human models (XCAT) serving as patient models. The phantoms exhibited varying levels of pulmonary disease, including lung nodules and emphysema. We scanned the patient models with a validated CT simulator (DukeSim) modeling a commercial CT scanner at 20 and 100 mAs dose levels, and then reconstructed the images with twelve kernels ranging from smooth to sharp. The harmonized virtual images were evaluated in four ways: 1) visual image quality, 2) bias and variation in density-based biomarkers, 3) bias and variation in morphology-based biomarkers, and 4) noise power spectrum (NPS) and lung histograms. The trained model harmonized the test set images with a structural similarity index of 0.95±0.1, a normalized mean squared error of 10.2±1.5%, and a peak signal-to-noise ratio of 31.8±1.5 dB. Moreover, the harmonized images yielded more precise quantifications of the emphysema-based imaging biomarkers LAA-950 (-1.5±1.8), Perc15 (13.65±9.3), and lung mass (0.1±0.3).
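For readers unfamiliar with the density-based emphysema biomarkers reported above, the sketch below illustrates their standard definitions as commonly used in quantitative CT: LAA-950 is the percentage of lung voxels with attenuation below -950 HU, Perc15 is the 15th percentile of the lung HU histogram, and lung mass follows from an approximate linear conversion of HU to physical density. This is a minimal illustration under stated assumptions, not the authors' code; the array names, the precomputed lung mask, and the voxel-volume parameter are hypothetical, and the HU-to-density mapping is the usual air/water linear approximation.

```python
import numpy as np

def emphysema_biomarkers(hu, lung_mask, voxel_volume_ml):
    """Density-based emphysema biomarkers from a CT volume in Hounsfield units.

    hu              : 3-D array of HU values (hypothetical input name)
    lung_mask       : boolean array marking lung voxels (assumed precomputed)
    voxel_volume_ml : volume of one voxel in millilitres
    """
    lung_hu = hu[lung_mask]

    # LAA-950: percentage of lung voxels with attenuation below -950 HU.
    laa950 = 100.0 * np.mean(lung_hu < -950)

    # Perc15: 15th percentile of the lung HU histogram.
    perc15 = np.percentile(lung_hu, 15)

    # Lung mass: HU maps approximately linearly to density, with
    # -1000 HU ~ 0 g/ml (air) and 0 HU ~ 1 g/ml (water).
    density_g_per_ml = np.clip((lung_hu + 1000.0) / 1000.0, 0.0, None)
    lung_mass_g = density_g_per_ml.sum() * voxel_volume_ml

    return laa950, perc15, lung_mass_g

# Example: estimate the bias of a biomarker by comparing a harmonized
# rendition against the ground-truth rendition of the same scan, in the
# spirit of the evaluation described above (variable names assumed).
laa_gt, _, _ = emphysema_biomarkers(hu_ground_truth, mask, vox_ml)
laa_h, _, _ = emphysema_biomarkers(hu_harmonized, mask, vox_ml)
bias = laa_h - laa_gt
```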
