Abstract

Introduction: To compare the clinical chest radiograph (CXR) reports provided by consultant radiologists and reporting radiographers with those of expert thoracic radiologists.

Methods: Adult CXRs (n = 193) from a single site were included; 83% were randomly selected from CXRs performed over one year, and 17% were selected from the discrepancy meeting. Chest radiographs were independently interpreted by two expert thoracic radiologists (CTR1 and CTR2). Clinical history and previous and follow-up imaging were available, but not the original clinical report. Two arbiters independently compared the expert and clinical reports. Kappa (κ), chi-square (χ²) and McNemar tests were performed to determine inter-observer agreement.

Results: CTR1 interpreted 187 (97%) and CTR2 186 (96%) of the CXRs, with 180 CXRs interpreted by both experts. Radiologists and radiographers provided 93 and 87 of the original clinical reports, respectively. Consensus between both expert thoracic radiologists and the radiographer clinical report was 70 (CTR1; κ = 0.59) and 70 (CTR2; κ = 0.62), comparable to agreement between the expert thoracic radiologists and the radiologist clinical report (CTR1 = 76, κ = 0.60; CTR2 = 75, κ = 0.62). The expert thoracic radiologists agreed with each other in 131 cases (κ = 0.48). There was no difference in agreement with either expert thoracic radiologist whether the clinical report was provided by radiographers or radiologists (CTR1: χ² = 0.056, p = 0.813; CTR2: χ² = 0.014, p = 0.906), nor when stratified by inter-expert agreement (radiographer McNemar p = 0.629; radiologist McNemar p = 0.701).

Conclusion: Even when weighted with chest radiographs reviewed at discrepancy meetings, the content of CXR reports from trained radiographers was indistinguishable from that of reports issued by radiologists and expert thoracic radiologists.
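The abstract names three agreement statistics: Cohen's kappa, chi-square, and McNemar's test. The following is a minimal illustrative sketch, not the study's analysis code, of how these could be computed in Python; the per-CXR labels and the 2x2 contingency layout are hypothetical stand-ins for the study data.

```python
# Illustrative sketch of the agreement statistics named in the abstract.
# The data below are hypothetical, not the study's results.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired report classifications per CXR
# (1 = abnormality reported, 0 = reported as normal)
expert_report   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
clinical_report = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# Cohen's kappa: chance-corrected inter-observer agreement
kappa = cohen_kappa_score(expert_report, clinical_report)

# 2x2 contingency table of paired classifications
table = np.zeros((2, 2), dtype=int)
for e, c in zip(expert_report, clinical_report):
    table[e, c] += 1

# Chi-square test of association on the contingency table
chi2, p_chi2, _, _ = chi2_contingency(table)

# McNemar's test for asymmetry in the paired disagreements
# (exact binomial version, appropriate for small cell counts)
mcn = mcnemar(table, exact=True)

print(f"kappa = {kappa:.2f}")
print(f"chi-square = {chi2:.3f}, p = {p_chi2:.3f}")
print(f"McNemar p = {mcn.pvalue:.3f}")
```

In the study itself, the chi-square comparison was between the radiographer-reported and radiologist-reported groups rather than a single expert/clinical pairing, but the mechanics of each test are as sketched above.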
