Abstract
Purpose
To compare assessments of CT phantom image quality obtained from human observers with those obtained by software analysis.

Methods and materials
A Catphan®600 CT QA phantom was scanned using posterior fossa, cerebrum, abdomen and chest protocols on three CT scanner models, as part of a dose optimisation strategy. CT image data sets (n = 24) obtained pre- and post-optimisation were blindly evaluated by radiographers (n = 8), who identified the number of distinct line pairs and contrast discs for each of the three supra-slice sets within the phantom's high- and low-contrast resolution modules. The same images were also reviewed using the web-based service Image Owl for automatic analysis of Catphan®600 images.

Results
Inter-observer reliability, measured using Cronbach's α between the human observers and again with software analysis included as a ninth observer, was α = 0.97 in both instances, indicating comparable internal consistency with and without software analysis. A paired-sample t-test showed no significant difference (p ≥ 0.05) between human observers and software analysis in 37.5% of observations for line pairs, and in 37.5%, 12.5% and 50% of observations for the sets of contrast discs representing nominal contrasts of 1.0%, 0.5% and 0.3% respectively. Software analysis findings improved relative to observer readings as contrast levels reduced.

Conclusion
Combined use of human observers and software analysis is recommended for evaluating image quality in CT using phantoms. However, the sole use of software analysis may provide more detail than that obtained by human observers. Further research to investigate the clinical relevance of such image quality findings is recommended.
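The inter-observer reliability statistic reported above, Cronbach's α, can be computed from a ratings matrix (images × observers) as α = (k/(k−1))·(1 − Σσ²ᵢ/σ²ₜ), where k is the number of observers, σ²ᵢ the variance of each observer's ratings and σ²ₜ the variance of the summed ratings. A minimal sketch of that calculation, on entirely hypothetical example data (not the study's ratings), might look like:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (images x observers) ratings matrix."""
    k = ratings.shape[1]                          # number of observers
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each observer's ratings
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: three observers rating four images almost identically.
ratings = np.array([
    [3.0, 3.0, 4.0],
    [4.0, 4.0, 4.0],
    [5.0, 5.0, 5.0],
    [6.0, 6.0, 6.0],
])
print(round(cronbach_alpha(ratings), 2))
```

Values near 1 indicate high internal consistency among observers, which is how the study's α = 0.97 (with and without the software "observer") supports treating the software analysis as interchangeable with a human rater for reliability purposes.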