Abstract

Stratifying candidates objectively on the merit of their publication portfolios is an onerous and difficult task. Institutional committees are under increasing pressure to rank applicants based on previous achievements for appointments/promotions, funding, and awards, and must do so within unforgiving time constraints. The journal impact factor (IF) has been loosely adopted in many circles for assessing article "quality," circumventing detailed review of individual articles. The premise supporting this practice often hinges on the assumption that high-IF journals are harder to publish in (for example, that they have higher rejection rates (RRs)) and that authors achieving publication in such periodicals should therefore be "recognized" for their achievement. There is no evidence of previous research linking IF and RR. A subset of Institute for Scientific Information (ISI)-listed radiology journals for which IF data were available was identified, and a direct-contact survey (63.3% response rate) was used to ascertain each journal's manuscript RR. In the sample reviewed, ISI-listed IF values ranged from 0.056 to 4.759 (mean 1.491), and editor-reported manuscript RRs ranged from 8.0% to 80.0% (mean 47.8%). Comparison of IF and RR using linear regression yielded an r² value of 0.223. In summary, this study demonstrates poor linear agreement between IF and RR for manuscripts submitted to peer-reviewed radiology journals, suggesting that journal IF is a poor predictor of RR, and vice versa. This finding may be of interest to institutional committees that have adopted the IF as an indicator of merit when reviewing the publication records in candidates' curricula vitae, and may encourage a rethinking of current candidate assessment practices.
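
For readers who want to see the kind of computation behind the reported r² value, the minimal sketch below fits an ordinary least-squares line of rejection rate on impact factor and reports r². The numbers, variable names, and use of scipy are illustrative assumptions only; they are not the study's journal data or analysis code.

    # Illustrative only: placeholder values, NOT the study's journal data.
    import numpy as np
    from scipy import stats

    # Hypothetical ISI impact factors and editor-reported rejection rates (%)
    impact_factor  = np.array([0.06, 0.40, 0.90, 1.20, 1.50, 2.10, 2.80, 3.50, 4.76])
    rejection_rate = np.array([8.0, 35.0, 40.0, 55.0, 30.0, 60.0, 45.0, 70.0, 80.0])

    # Ordinary least-squares fit: RR = slope * IF + intercept
    fit = stats.linregress(impact_factor, rejection_rate)
    r_squared = fit.rvalue ** 2  # coefficient of determination (r^2)

    print(f"slope={fit.slope:.2f}  intercept={fit.intercept:.2f}  r^2={r_squared:.3f}")

On real data, an r² of 0.223 would mean that only about a fifth of the variance in rejection rate is explained by impact factor, which is the basis for the abstract's conclusion of poor linear agreement.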
