Abstract
OBJECTIVE. Most peer review programs focus on error detection, numeric scoring, and radiologist-specific error rates. The effectiveness of this approach for learning and systematic improvement is uncertain at best. Radiologists have been pushing for a transition from an individually punitive peer review system to a peer-learning model. This national survey of U.S. radiologists aims to assess the current status of peer review and opportunities for improvement.
MATERIALS AND METHODS. A 21-question multiple-choice questionnaire was developed, and its face validity was assessed by the ARRS Performance Quality Improvement subcommittee. The questionnaire was e-mailed to 17,695 ARRS members and remained open for 4 weeks; two e-mail reminders were sent. Responses were collected anonymously. Only responses from board-certified, practicing radiologists who participate in peer review were analyzed.
RESULTS. The response rate was 4.2% (742/17,695), and 73.7% (547/742) of responses met the inclusion criteria. Most responders were in private practice (51.7%, 283/547), in a group of 11-50 radiologists (50.5%), and in an urban setting (61.6%). Peer review systems varied considerably: RADPEER was used by less than half (45.0%), and cases were selected most commonly by commercial software (36.2%) or manually (31.2%). There was no consensus on the number of required peer reviews per month (10-20 cases, 32.1%; > 20 cases, 29.1%; < 10 cases, 21.7%). Less than half of responders (43.7%) reported that peer review was not used for group education. Although most responders (67.7%) were notified of their peer review results individually, 21.5% were not notified at all. Nearly half (44.5%) were dissatisfied with peer review, most commonly because of insufficient learning (94.0%) and inaccurate representation of their performance (75.5%). Group discrepancy rates were unknown to most radiologists who participate in peer review (54.3%). Submission bias was the main reason cited for underreporting of serious discrepancies (49.0%). Most responders found four peer-learning methods feasible in daily practice: incidental observation, 65.1%; focused practice review, 52.9%; professional auditing, 45.8%; and blinded double reading, 35.4%.
CONCLUSION. More than half of participants reported that peer review data are used for educational purposes. However, significant diversity remains in current peer review practice, with no agreement on the number of required reviews, the method of case selection, or the oversight of results. Nearly half of the radiologists reported insufficient learning, although most felt a better system would be feasible in daily practice.