Abstract

Problems with the design and statistical evaluation of clinical efficacy trials of antimicrobial agents are reviewed. Of the three major criteria used for evaluating antimicrobial agents (efficacy, toxicity, cost), the most important is efficacy. Clinical efficacy can be evaluated in uncontrolled or controlled clinical trials. Uncontrolled trials are often conducted to satisfy Food and Drug Administration requirements during premarketing testing; the response rate is typically high because only patients with susceptible infections may be treated and large doses are given. Controlled antibiotic trials should be randomized, blinded, parallel comparisons of an investigational agent versus the best available agent at an accepted dose. However, interpretation of these studies is frequently clouded by poor study design, small sample sizes, and heterogeneous patient populations. Controlled trials are usually centered on a null hypothesis (i.e., that no difference will be found between the agents being compared). All conclusions (to reject or not reject the null hypothesis) should be carefully evaluated by clinicians seeking to apply the available data to patient care. Researchers can incorrectly conclude that two therapies have equal efficacy because of insufficient statistical power (i.e., too small a sample size) or poor study design. Likewise, researchers may incorrectly conclude that there is a statistically significant difference between two therapies because of poor design or improper sample selection. For the clinician, clinical relevance takes precedence over statistical significance. Before the results of a study are allowed to affect drug use in an institution, strong similarities between the subjects and methods in the study and the patients and care in the institution should be demonstrated.
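The link between statistical power and sample size mentioned above can be illustrated with a standard two-proportion sample-size formula (normal approximation). This is a generic sketch, not a calculation from the reviewed trials: the cure rates, alpha, and power below are hypothetical values chosen only to show how many patients per arm a comparative antibiotic trial can require.

```python
from statistics import NormalDist
from math import ceil, sqrt

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference
    between two cure rates p1 and p2 (two-sided test, normal
    approximation for comparing two independent proportions)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)          # critical value for two-sided alpha
    z_beta = z(power)                   # quantile corresponding to desired power
    p_bar = (p1 + p2) / 2               # pooled proportion under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: distinguishing a 90% from an 80% cure rate
# at alpha = 0.05 with 80% power requires about 199 patients per arm,
# far more than many small comparative trials enroll.
print(n_per_group(0.90, 0.80))  # → 199
```

A trial that enrolls, say, 40 patients per arm and finds "no difference" between two such agents has not demonstrated equivalence; it was simply underpowered to detect the difference, which is the error the abstract warns against.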
