Abstract

Purpose
To assess the inter-reader reproducibility of the Prostate Imaging Quality (PI-QUAL) score between readers with varying clinical experience, and its reproducibility for assessing image quality across different institutions.

Methods
Following IRB approval, we assessed 60 consecutive prostate MRI scans performed at different academic teaching and non-academic hospitals and uploaded to our institute's PACS for second opinion or discussion in case conferences. Anonymized scans were independently reviewed using the PI-QUAL scoring sheet by three readers: two radiologists (with 1 and 12 years of prostate MRI reporting experience) and an experienced MRI technologist with an interest in image acquisition and quality. All readers were blinded to the site where the scans were acquired.

Results
Agreement coefficients between the three readers in paired comparisons for each individual PI-QUAL score were moderate. When the scans were clustered into two groups according to their ability to rule in or rule out clinically significant prostate cancer (i.e., PI-QUAL score 1–3 vs PI-QUAL score 4–5), the Gwet AC1 coefficients between the three readers in paired comparisons were good to very good (Gwet AC1: 0.77, 0.67, and 0.836, respectively), with agreement percentages of 88.3%, 83.3%, and 91.7%, respectively. The agreement coefficient was higher between the experienced radiologist and the experienced MRI technologist than between the less experienced trainee radiologist and the other two readers. The mean PI-QUAL score provided by each reader was significantly higher for scans from academic hospitals (n = 32) than for those from community hospitals (n = 28) (experienced radiologist 4.6 vs 2.9; trainee radiologist 4.5 vs 2.4; experienced technologist 4.4 vs 2.4; p < 0.001).

Conclusion
We observed good to very good reproducibility in the assessment of each MRI sequence and when scans were clustered into two groups (PI-QUAL 1–3 vs PI-QUAL 4–5) between readers with varying clinical experience. However, reproducibility for each single PI-QUAL score between readers was moderate. Better definitions of the criteria for each PI-QUAL score may further improve reproducibility between readers. Additionally, the mean PI-QUAL score provided by all three readers was significantly higher for scans performed at academic teaching hospitals than at community hospitals.
