Abstract

To study current diagnostic test evaluation, 129 recent articles were assessed against several well-known methodological criteria. Only 68% employed a well-defined "gold standard." Test interpretation was clearly described in only 68% and was stated to be "blind" in only 40%. Approximately 20% used the terms sensitivity and specificity incorrectly. Predictive values were considered in only 31%, and the influence of disease prevalence and study setting was considered in only 19%. Overall, 74% failed to demonstrate more than four of seven important characteristics, and there was an increased proportion of high specificities reported in this group. Articles assessing new tests reported high sensitivities and specificities significantly more often than articles assessing existing tests. These results indicate a clear need for greater attention to accepted methodological standards on the part of researchers, reviewers, and editors.
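The abstract refers to sensitivity, specificity, predictive values, and the influence of disease prevalence without defining them. As a minimal sketch under the standard definitions (not taken from the article itself), the following Python snippet shows why predictive values cannot be interpreted apart from prevalence and study setting: the test characteristics in the example are hypothetical illustrative numbers.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute positive and negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    ppv = tp / (tp + fp)  # probability of disease given a positive result
    npv = tn / (tn + fn)  # probability of no disease given a negative result
    return ppv, npv

# Hypothetical example: a test with 95% sensitivity and 90% specificity
# applied in a high-prevalence referral setting versus a low-prevalence screening setting.
for prev in (0.50, 0.01):
    ppv, npv = predictive_values(0.95, 0.90, prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

With these assumed values, the positive predictive value falls from roughly 0.90 at 50% prevalence to under 0.10 at 1% prevalence, which is the kind of setting-dependence the surveyed articles were expected to address.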

