Abstract

Diagnostic tests guide physicians in the assessment of clinical disease states, just as statistical tests guide scientists in the testing of scientific hypotheses. Sensitivity and specificity are properties of diagnostic tests and are not predictive of disease in individual patients. Positive and negative predictive values are predictive of disease in patients and depend on both the diagnostic test used and the prevalence of disease in the population studied. These concepts are best illustrated by a two-by-two table of possible testing outcomes, which shows that diagnostic tests may lead to correct or erroneous clinical conclusions. In a similar manner, hypothesis testing may or may not yield correct conclusions. A two-by-two table of possible outcomes shows that two types of error in hypothesis testing are possible. One can falsely conclude that a significant difference exists between groups (type I error); the probability of a type I error is alpha. One can falsely conclude that no difference exists between groups (type II error); the probability of a type II error is beta. The consequence and probability of these errors depend on the nature of the research study. Statistical power indicates the ability of a research study to detect a significant difference between populations when a significant difference truly exists; power equals 1 − beta. Because hypothesis testing yields only "yes" or "no" answers, confidence intervals, which estimate the magnitude of a difference, can be calculated to complement the results of hypothesis testing. Finally, just as some abnormal laboratory values can be ignored clinically, some statistical differences may not be relevant clinically.
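The two-by-two table described above can be sketched numerically. The following is a minimal illustration, with invented counts, of how sensitivity and specificity are fixed properties of a test while predictive values shift with disease prevalence (via Bayes' theorem); none of these numbers come from the article itself.

```python
# Hypothetical 2x2 table of diagnostic test outcomes (illustrative counts only):
#                  Disease present   Disease absent
# Test positive        TP = 90           FP = 50
# Test negative        FN = 10           TN = 850
TP, FP, FN, TN = 90, 50, 10, 850

sensitivity = TP / (TP + FN)   # P(test positive | disease present)
specificity = TN / (TN + FP)   # P(test negative | disease absent)
ppv = TP / (TP + FP)           # P(disease present | test positive)
npv = TN / (TN + FN)           # P(disease absent | test negative)

def predictive_values(sens, spec, prevalence):
    """Recompute PPV and NPV for the same test at a different prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# The same test applied in a lower-prevalence population yields a lower PPV,
# even though sensitivity and specificity are unchanged.
low_prev_ppv, low_prev_npv = predictive_values(sensitivity, specificity, 0.01)

# Power relates to type II error exactly as the abstract states: power = 1 - beta.
beta = 0.2
power = 1 - beta
```

Running the sketch shows PPV falling as prevalence falls, which is the abstract's central point about predictive values being population-dependent.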
